Re: [julia-users] Re: Fast vector element-wise multiplication

2016-11-01 Thread Sheehan Olver
Ah thanks!

Though I guess if I want the same code to work also on a GPU array then this 
won't help?

Sent from my iPhone

> On 2 Nov. 2016, at 13:51, Chris Rackauckas  wrote:
> 
> It's the other way around. .* won't fuse because it's still an operator. .= 
> will. If you want .* to fuse, you can instead do:
> 
> A .= *.(A,B)
> 
> since this invokes the broadcast on *, instead of invoking .*. But that's 
> just a temporary thing.
> 
> On Tuesday, November 1, 2016 at 7:27:40 PM UTC-7, Tom Breloff wrote:
>> 
>> As I understand it, the .* will fuse, but the .= will not (until 0.6?), so A 
>> will be rebound to a newly allocated array.  If my understanding is wrong 
>> I'd love to know.  There have been many times in the last few days that I 
>> would have used it...
>> 
>>> On Tue, Nov 1, 2016 at 10:06 PM, Sheehan Olver  wrote:
>>> Ah, good point.  Though I guess that won't work til 0.6 since .* won't 
>>> auto-fuse yet? 
>>> 
>>> Sent from my iPhone
>>> 
 On 2 Nov. 2016, at 12:55, Chris Rackauckas  wrote:
 
 This is pretty much obsolete by the . fusing changes:
 
 A .= A.*B
 
 should be an in-place update of A scaled by B (Tomas' solution).
 
> On Tuesday, November 1, 2016 at 4:39:15 PM UTC-7, Sheehan Olver wrote:
> Should this be added to a package?  I imagine if the arrays are on the 
> GPU (AFArrays) then the operation could be much faster, and having a 
> consistent name would be helpful.
> 
> 
>> On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux 
>> wrote:
>> Dear all,
>> 
>> I'm looking for the fastest way to do element-wise vector multiplication 
>> in Julia. The best I could have done is the following implementation 
>> which still runs 1.5x slower than the dot product. I assume the dot 
>> product would include such an operation ... and then do a cumulative sum 
>> over the element-wise product.
>> 
>> The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS 
>> does not. So my question is:
>> 
>> 1) is there any chance I can do vector element-wise multiplication 
>> faster than the actual dot product?
>> 2) why is the built-in element-wise multiplication operator (.*) so much 
>> slower than my own implementation for such a basic linalg operation 
>> (full Julia)?
>> 
>> Thank you,
>> Lionel
>> 
>> Best custom implementation:
>> 
>> function xpy!{T<:Number}(A::Vector{T},B::Vector{T})
>>     n = size(A)[1]
>>     if n == size(B)[1]
>>         for i = 1:n
>>             @inbounds A[i] *= B[i]
>>         end
>>     end
>>     return A
>> end
>> 
>> Benchmark results (JuliaBox, A = randn(30)):
>> 
>> function                          CPU (s)   GC (%)  ALLOCATION (bytes)  CPU (x)
>> dot(A,B)                          1.58e-04  0.00    16                  1.0
>> xpy!(A,B)                         2.31e-04  0.00    80                  1.5
>> NumericExtensions.multiply!(P,Q)  3.60e-04  0.00    80                  2.3
>> xpy!(A,B) - no @inbounds check    4.36e-04  0.00    80                  2.8
>> P.*Q                              2.52e-03  50.36   2400512             16.0
>> 
>> 


Re: [julia-users] Re: Fast vector element-wise multiplication

2016-11-01 Thread Chris Rackauckas
It's the other way around. .* won't fuse because it's still an operator. .= 
will. If you want .* to fuse, you can instead do:

A .= *.(A,B)

since this invokes the broadcast on *, instead of invoking .*. But that's 
just a temporary thing.
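
[Editor's note: a minimal sketch contrasting the forms discussed above, assuming
Julia 0.5, where .* does not yet fuse; broadcast! is the explicit in-place
equivalent and a safe way to avoid the temporary.]
```
A = rand(10); B = rand(10)

C = A .* B               # allocates a new output array and binds it to C
A .= A .* B              # in 0.5 the right-hand side still allocates a
                         # temporary, but the assignment writes into A
broadcast!(*, A, A, B)   # fully in-place: no temporary allocated
```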

On Tuesday, November 1, 2016 at 7:27:40 PM UTC-7, Tom Breloff wrote:
>
> As I understand it, the .* will fuse, but the .= will not (until 0.6?), so 
> A will be rebound to a newly allocated array.  If my understanding is wrong 
> I'd love to know.  There have been many times in the last few days that I 
> would have used it...
>
> On Tue, Nov 1, 2016 at 10:06 PM, Sheehan Olver  > wrote:
>
>> Ah, good point.  Though I guess that won't work til 0.6 since .* won't 
>> auto-fuse yet? 
>>
>> Sent from my iPhone
>>
>> On 2 Nov. 2016, at 12:55, Chris Rackauckas > > wrote:
>>
>> This is pretty much obsolete by the . fusing changes:
>>
>> A .= A.*B
>>
>> should be an in-place update of A scaled by B (Tomas' solution).
>>
>> On Tuesday, November 1, 2016 at 4:39:15 PM UTC-7, Sheehan Olver wrote:
>>>
>>> Should this be added to a package?  I imagine if the arrays are on the 
>>> GPU (AFArrays) then the operation could be much faster, and having a 
>>> consistent name would be helpful.
>>>
>>>
>>> On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux 
>>> wrote:

 Dear all,

 I'm looking for the fastest way to do element-wise vector 
 multiplication in Julia. The best I could have done is the following 
 implementation which still runs 1.5x slower than the dot product. I assume 
 the dot product would include such an operation ... and then do a 
 cumulative sum over the element-wise product.

 The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS 
 does not. So my question is:
 
 1) is there any chance I can do vector element-wise multiplication 
 faster than the actual dot product?
 2) why is the built-in element-wise multiplication operator (.*) so much 
 slower than my own implementation for such a basic linalg operation 
 (full Julia)?
 
 Thank you,
 Lionel
 
 Best custom implementation:
 
 function xpy!{T<:Number}(A::Vector{T},B::Vector{T})
     n = size(A)[1]
     if n == size(B)[1]
         for i = 1:n
             @inbounds A[i] *= B[i]
         end
     end
     return A
 end
 
 Benchmark results (JuliaBox, A = randn(30)):
 
 function                          CPU (s)   GC (%)  ALLOCATION (bytes)  CPU (x)
 dot(A,B)                          1.58e-04  0.00    16                  1.0
 xpy!(A,B)                         2.31e-04  0.00    80                  1.5
 NumericExtensions.multiply!(P,Q)  3.60e-04  0.00    80                  2.3
 xpy!(A,B) - no @inbounds check    4.36e-04  0.00    80                  2.8
 P.*Q                              2.52e-03  50.36   2400512             16.0
 


>

Re: [julia-users] Re: Fast vector element-wise multiplication

2016-11-01 Thread Tom Breloff
As I understand it, the .* will fuse, but the .= will not (until 0.6?), so
A will be rebound to a newly allocated array.  If my understanding is wrong
I'd love to know.  There have been many times in the last few days that I
would have used it...

On Tue, Nov 1, 2016 at 10:06 PM, Sheehan Olver 
wrote:

> Ah, good point.  Though I guess that won't work til 0.6 since .* won't
> auto-fuse yet?
>
> Sent from my iPhone
>
> On 2 Nov. 2016, at 12:55, Chris Rackauckas  wrote:
>
> This is pretty much obsolete by the . fusing changes:
>
> A .= A.*B
>
> should be an in-place update of A scaled by B (Tomas' solution).
>
> On Tuesday, November 1, 2016 at 4:39:15 PM UTC-7, Sheehan Olver wrote:
>>
>> Should this be added to a package?  I imagine if the arrays are on the
>> GPU (AFArrays) then the operation could be much faster, and having a
>> consistent name would be helpful.
>>
>>
>> On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux
>> wrote:
>>>
>>> Dear all,
>>>
>>> I'm looking for the fastest way to do element-wise vector multiplication
>>> in Julia. The best I could have done is the following implementation which
>>> still runs 1.5x slower than the dot product. I assume the dot product would
>>> include such an operation ... and then do a cumulative sum over the
>>> element-wise product.
>>>
>>> The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS
>>> does not. So my question is:
>>>
>>> 1) is there any chance I can do vector element-wise multiplication
>>> faster than the actual dot product?
>>> 2) why is the built-in element-wise multiplication operator (.*) so much
>>> slower than my own implementation for such a basic linalg operation
>>> (full Julia)?
>>>
>>> Thank you,
>>> Lionel
>>>
>>> Best custom implementation:
>>>
>>> function xpy!{T<:Number}(A::Vector{T},B::Vector{T})
>>>     n = size(A)[1]
>>>     if n == size(B)[1]
>>>         for i = 1:n
>>>             @inbounds A[i] *= B[i]
>>>         end
>>>     end
>>>     return A
>>> end
>>>
>>> Benchmark results (JuliaBox, A = randn(30)):
>>>
>>> function                          CPU (s)   GC (%)  ALLOCATION (bytes)  CPU (x)
>>> dot(A,B)                          1.58e-04  0.00    16                  1.0
>>> xpy!(A,B)                         2.31e-04  0.00    80                  1.5
>>> NumericExtensions.multiply!(P,Q)  3.60e-04  0.00    80                  2.3
>>> xpy!(A,B) - no @inbounds check    4.36e-04  0.00    80                  2.8
>>> P.*Q                              2.52e-03  50.36   2400512             16.0
>>> 
>>>
>>>


Re: [julia-users] Re: Webapp Deployment

2016-11-01 Thread Reuben Brooks
Hi Shashi,

I was following example 1 at escher-jl.org:

hello.jl contains:

function main(window)
plaintext("Hello, World!")
end

I get a similar result for most of the example files. Some don't show 
anything, just a blank screen; I don't remember which was which.

-Reuben

On Tuesday, November 1, 2016 at 2:23:49 PM UTC-5, Shashi Gowda wrote:
>
> Hi Reuben,
>
> what's in hello.jl ? There isn't a examples/hello.jl in Escher is there?
>
> A file you are trying to serve should end with a function definition such 
> as:
>
> function main(window) # must take an argument
> end
>
> And this function should return the UI object you want to render.
>
> On Tue, Nov 1, 2016 at 9:50 PM, wookyoung noh  > wrote:
>
>> Hello, I'm a developer of Bukdu.
>> https://github.com/wookay/Bukdu.jl
>>
>> it's a web development framework on top of HttpServer.jl
>> Thanks!
>>
>> On Tuesday, November 1, 2016 at 1:08:01 PM UTC+9, Reuben Brooks wrote:
>>>
>>> Context: I love julia, and I've never built any kind of webapp. Most of 
>>> my programming experience is in Mathematica and Julia...hacking things 
>>> together (poorly) in Python when nothing else works.
>>>
>>> Problem: I have a script  / notebook in julia that pulls data from 
>>> sources, analyzes it, builds fancy plots, and has lots of nice information. 
>>> Now I want to build a basic webapp that will allow me to access this 
>>> information anywhere, anytime (will be updated regularly). 
>>>
>>> Question 1: is there a julia package that suits my needs well, or should 
>>> I look at using some other front end to create the frontend? Elm intrigues 
>>> me, as much for the learning as for the actual solution. 
>>>
>>> Bottom line: I don't know enough about what I'm wading into to choose 
>>> wisely. What does the community suggest?
>>>
>>
>

Re: [julia-users] Re: Fast vector element-wise multiplication

2016-11-01 Thread Sheehan Olver
Ah, good point.  Though I guess that won't work til 0.6 since .* won't 
auto-fuse yet? 

Sent from my iPhone

> On 2 Nov. 2016, at 12:55, Chris Rackauckas  wrote:
> 
> This is pretty much obsolete by the . fusing changes:
> 
> A .= A.*B
> 
> should be an in-place update of A scaled by B (Tomas' solution).
> 
>> On Tuesday, November 1, 2016 at 4:39:15 PM UTC-7, Sheehan Olver wrote:
>> Should this be added to a package?  I imagine if the arrays are on the GPU 
>> (AFArrays) then the operation could be much faster, and having a consistent 
>> name would be helpful.
>> 
>> 
>>> On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux wrote:
>>> Dear all,
>>> 
>>> I'm looking for the fastest way to do element-wise vector multiplication in 
>>> Julia. The best I could have done is the following implementation which 
>>> still runs 1.5x slower than the dot product. I assume the dot product would 
>>> include such an operation ... and then do a cumulative sum over the 
>>> element-wise product.
>>> 
>>> The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS 
>>> does not. So my question is:
>>> 
>>> 1) is there any chance I can do vector element-wise multiplication 
>>> faster than the actual dot product?
>>> 2) why is the built-in element-wise multiplication operator (.*) so much 
>>> slower than my own implementation for such a basic linalg operation 
>>> (full Julia)?
>>> 
>>> Thank you,
>>> Lionel
>>> 
>>> Best custom implementation:
>>> 
>>> function xpy!{T<:Number}(A::Vector{T},B::Vector{T})
>>>     n = size(A)[1]
>>>     if n == size(B)[1]
>>>         for i = 1:n
>>>             @inbounds A[i] *= B[i]
>>>         end
>>>     end
>>>     return A
>>> end
>>> 
>>> Benchmark results (JuliaBox, A = randn(30)):
>>> 
>>> function                          CPU (s)   GC (%)  ALLOCATION (bytes)  CPU (x)
>>> dot(A,B)                          1.58e-04  0.00    16                  1.0
>>> xpy!(A,B)                         2.31e-04  0.00    80                  1.5
>>> NumericExtensions.multiply!(P,Q)  3.60e-04  0.00    80                  2.3
>>> xpy!(A,B) - no @inbounds check    4.36e-04  0.00    80                  2.8
>>> P.*Q                              2.52e-03  50.36   2400512             16.0
>>> 


[julia-users] Re: Fast vector element-wise multiplication

2016-11-01 Thread Chris Rackauckas
This is pretty much obsolete by the . fusing changes:

A .= A.*B

should be an in-place update of A scaled by B (Tomas' solution).

On Tuesday, November 1, 2016 at 4:39:15 PM UTC-7, Sheehan Olver wrote:
>
> Should this be added to a package?  I imagine if the arrays are on the GPU 
> (AFArrays) then the operation could be much faster, and having a consistent 
> name would be helpful.
>
>
> On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux wrote:
>>
>> Dear all,
>>
>> I'm looking for the fastest way to do element-wise vector multiplication 
>> in Julia. The best I could have done is the following implementation which 
>> still runs 1.5x slower than the dot product. I assume the dot product would 
>> include such an operation ... and then do a cumulative sum over the 
>> element-wise product.
>>
>> The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS 
>> does not. So my question is:
>>
>> 1) is there any chance I can do vector element-wise multiplication 
>> faster than the actual dot product?
>> 2) why is the built-in element-wise multiplication operator (.*) so much 
>> slower than my own implementation for such a basic linalg operation 
>> (full Julia)?
>>
>> Thank you,
>> Lionel
>>
>> Best custom implementation:
>>
>> function xpy!{T<:Number}(A::Vector{T},B::Vector{T})
>>     n = size(A)[1]
>>     if n == size(B)[1]
>>         for i = 1:n
>>             @inbounds A[i] *= B[i]
>>         end
>>     end
>>     return A
>> end
>>
>> Benchmark results (JuliaBox, A = randn(30)):
>>
>> function                          CPU (s)   GC (%)  ALLOCATION (bytes)  CPU (x)
>> dot(A,B)                          1.58e-04  0.00    16                  1.0
>> xpy!(A,B)                         2.31e-04  0.00    80                  1.5
>> NumericExtensions.multiply!(P,Q)  3.60e-04  0.00    80                  2.3
>> xpy!(A,B) - no @inbounds check    4.36e-04  0.00    80                  2.8
>> P.*Q                              2.52e-03  50.36   2400512             16.0
>> 
>>
>>

[julia-users] Re: Fast vector element-wise multiplication

2016-11-01 Thread Sheehan Olver
Should this be added to a package?  I imagine if the arrays are on the GPU 
(AFArrays) then the operation could be much faster, and having a consistent 
name would be helpful.


On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux wrote:
>
> Dear all,
>
> I'm looking for the fastest way to do element-wise vector multiplication 
> in Julia. The best I could have done is the following implementation which 
> still runs 1.5x slower than the dot product. I assume the dot product would 
> include such an operation ... and then do a cumulative sum over the 
> element-wise product.
>
> The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS 
> does not. So my question is:
>
> 1) is there any chance I can do vector element-wise multiplication 
> faster than the actual dot product?
> 2) why is the built-in element-wise multiplication operator (.*) so much 
> slower than my own implementation for such a basic linalg operation 
> (full Julia)?
>
> Thank you,
> Lionel
>
> Best custom implementation:
>
> function xpy!{T<:Number}(A::Vector{T},B::Vector{T})
>     n = size(A)[1]
>     if n == size(B)[1]
>         for i = 1:n
>             @inbounds A[i] *= B[i]
>         end
>     end
>     return A
> end
>
> Benchmark results (JuliaBox, A = randn(30)):
>
> function                          CPU (s)   GC (%)  ALLOCATION (bytes)  CPU (x)
> dot(A,B)                          1.58e-04  0.00    16                  1.0
> xpy!(A,B)                         2.31e-04  0.00    80                  1.5
> NumericExtensions.multiply!(P,Q)  3.60e-04  0.00    80                  2.3
> xpy!(A,B) - no @inbounds check    4.36e-04  0.00    80                  2.8
> P.*Q                              2.52e-03  50.36   2400512             16.0
> 
>
>
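
[Editor's note: a hedged timing sketch for comparing the approaches in this
thread. The vector length is illustrative only (the original randn(30) call
above is truncated in the archive), and it assumes the BenchmarkTools package,
which postdates the original post.]
```
using BenchmarkTools

A = randn(300_000); B = randn(300_000)

@benchmark xpy!($A, $B)               # in-place loop: no per-call allocation
@benchmark $A .* $B                   # allocates a fresh output vector
@benchmark broadcast!(*, $A, $A, $B)  # in-place broadcast, comparable to xpy!
```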

Re: [julia-users] Question: Forcing readtable to create string type on import

2016-11-01 Thread LeAnthony Mathews
Great, that worked for forcing the column into a string type.
Thanks

On Monday, October 31, 2016 at 3:26:14 PM UTC-4, Jacob Quinn wrote:
>
> You could use CSV.jl: http://juliadata.github.io/CSV.jl/stable/
>
> In this case, you'd do:
>
> df1 = CSV.read(file1; types=Dict(1=>String)) # assuming your account 
> number is column # 1
> df2 = CSV.read(file2; types=Dict(1=>String))
>
> -Jacob
>
>
> On Mon, Oct 31, 2016 at 12:50 PM, LeAnthony Mathews  > wrote:
>
>> Using v0.5.0
>> I have two different 10,000 line CSV files that I am reading into two 
>> different dataframe variables using the readtable function.
>> Each table has in common a ten digit account_number that I would like to 
>> use as an index and join into one master file.
>>
>> Here is the account number example in the original CSV from file1:
>> 8018884596
>> 8018893530
>> 8018909633
>>
>> When I do a readtable of this CSV into file1 then do a
>> typeof(file1[:account_number]) I get:
>> DataArrays.DataArray{Int32,1}
>>  -571049996
>>  -571041062
>>  -571024959
>>
>> when I do a
>> typeof(file2[:account_number])
>> DataArrays.DataArray{String,1}
>>
>>
>> Question:
>> My CSV files give no guidance that account_number should be Int32 or 
>> string type.  How do I force it to make both account_number elements type 
>> String?
>>
>> I would like this join command to work:
>> new_account_join = join(file1, file2, on = :account_number, kind = :left)
>>
>> But I am getting this error:
>> ERROR: TypeError: typeassert: expected Union{Array{Symbol,1},Symbol}, got Array{Array{Symbol,1},1}
>>  in (::Base.#kw##join)(::Array{Any,1}, ::Base.#join, ::DataFrames.DataFrame, ::DataFrames.DataFrame) at .\:0
>>
>>
>> Any help would be appreciated.  
>>
>>
>>
>
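
[Editor's note: putting the suggested fix together, a hedged end-to-end sketch.
The file names and the column number are placeholders; CSV.read returned a
DataFrame by default in this era.]
```
using CSV, DataFrames

# force column 1 (the account number) to String in both files
file1 = CSV.read("file1.csv"; types=Dict(1 => String))
file2 = CSV.read("file2.csv"; types=Dict(1 => String))

# with matching column types, the left join works as intended
new_account_join = join(file1, file2, on = :account_number, kind = :left)
```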

Re: [julia-users] Reducing complexity of OffsetArrays

2016-11-01 Thread Bob Portmann
Yes. There is no question that there is tremendous potential in the system
you have developed.

One last thing. I am surprised this does not error in 0.5:
```
julia> function t(a::AbstractArray)
           size(a)
       end

julia> t(rand(5))
```
It seems to me that if AbstractArray is going to represent all arrays then this
needs to error on all arrays, not just OffsetArrays and their ilk. If
people don't code against the abstract model then bug-prone code will be
the result. If that means they have to change the function to:
```
julia> function t(a::Array)
           size(a)
       end
```
until they are ready to fix the code, then that is the price that must be
paid. The alternative is a new AbstractArray type as I outline above.

Cheers,
Bob

On Tue, Nov 1, 2016 at 3:41 PM, Tim Holy  wrote:

> There's still a simple model: the indices are the *key* of the entry.
> Think about an array as a very special Dict. You could create a Dict with
> integer keys, let's say from -2:5. Presumably you wouldn't be exactly happy
> if there were some magical way that the number you originally stored as
> `d[-1] = 3.2` were alternatively accessible as `d[2]`, simply because the
> smallest index was -2 and therefore 3.2 is the "second" entry?
>
> Like a Dict, for an array the value always goes with the key (the
> indices). Perhaps this will help:
> ```
> julia> using OffsetArrays
>
> julia> a = OffsetArray(rand(11), -5:5)
> OffsetArrays.OffsetArray{Float64,1,Array{Float64,1}} with indices -5:5:
>  0.815289
>  0.0043941
>  0.00403153
>  0.478065
>  0.150709
>  0.256156
>  0.934703
>  0.672495
>  0.428721
>  0.242469
>  0.43742
>
> julia> idx = OffsetArray(-1:1, -1:1)
> OffsetArrays.OffsetArray{Int64,1,UnitRange{Int64}} with indices -1:1:
>  -1
>   0
>   1
>
> julia> b = a[idx]
> OffsetArrays.OffsetArray{Float64,1,Array{Float64,1}} with indices -1:1:
>  0.150709
>  0.256156
>  0.934703
>
> julia> a[-1]
> 0.15070935766983662
>
> julia> b[-1]
> 0.15070935766983662
> ```
> So indexing `b = a[idx]` means that `b[j] = a[idx[j]]`. Does that help?
>
> Best,
> --Tim
>
> On Tue, Nov 1, 2016 at 3:36 PM, Bob Portmann 
> wrote:
>
>> Like I said, no real practical experience yet. The increase in complexity
>> that I fear is the loss of, e.g., writing arr[2,3] and having it be the
>> element in the 2nd row and third column (i.e., the loss of a simple model
>> of how things are laid out). Maybe my fears are unfounded. Others don't
>> seem concerned it would seem.
>>
>> I'll check out those packages that you mention.
>>
>> Thanks,
>> Bob
>>
>> On Sun, Oct 30, 2016 at 2:29 PM, Tim Holy  wrote:
>>
>>> I'm afraid I still don't understand the claimed big increment in
>>> complexity. First, let's distinguish "generic offset arrays" from the
>>> OffsetArrays.jl package. If you're happy using OffsetArrays, you don't have
>>> to write your own offset-array type. Being able to use an established &
>>> tested package reduces your burden a lot, and you can ignore the second
>>> half of the devdocs page entirely.
>>>
>>> If you just want to *use* OffsetArrays.jl, the basic changes in coding
>>> style for writing indices-aware code are:
>>>
>>> - any place you used to call `size`, you probably want to call `indices`
>>> instead (and likely make minor adjustments elsewhere, since `indices`
>>> returns a tuple-of-ranges---but such changes tend to be very obvious);
>>> - check all uses of `similar`; some will stay as-is, others will migrate
>>> to `similar(f, inds)` style.
>>>
>>> In my experience, that's just about it. The devdocs goes into quite a
>>> lot of detail to explain the rationale, but really the actual changes are
>>> quite small. While you can't quite do it via `grep` and `sed`, to me that
>>> just doesn't seem complicated.
>>>
>>> Where the pain comes is that if you're converting old code, you
>>> sometimes have to think your way through it again---"hmm, what do I really
>>> mean by this index"? If your code had complicated indexing the first time
>>> you wrote it, unfortunately you're going to have to think about it
>>> carefully again; so in some cases, "porting" code is almost as bad as
>>> writing it the first time. However, if you write indices-aware code in the
>>> first place, in my experience the added burden is almost negligible, and in
>>> quite a few cases the ability to offset array indices makes things *easier*
>>> (e.g., "padding" an array on its edges is oh-so-much-clearer than it used
>>> to be, it's like a different world). That's the whole reason I implemented
>>> this facility in julia-0.5: to make life easier, not to make it harder.
>>> (Personally I think the whole argument over 0-based and 1-based indexing is
>>> stupid; it's the ability to use arbitrary indices that I find interesting &
>>> useful, and it makes most of my code prettier.)
>>>
>>> For examples of packages that use OffsetArrays, check the following:
>>> - CatIndices
>>> - FFTViews
>>> - ImageFiltering
>>>
>>> 

Re: [julia-users] Barnes-Hut N-body simulations (was recursive data structures with Julia)

2016-11-01 Thread Angel de Vicente
Hi,

Angel de Vicente  writes:
> Being used to nullify a pointer for this, I'm not sure how to best
> proceed in Julia. Is there a better way to build recursive data
> structures? 

OK, so this was just a test example to go for something bigger, and by
using the cleaner version with Nullable fields I implemented a basic
code to perform a Barnes-Hut N-body simulation
(https://en.wikipedia.org/wiki/Barnes%E2%80%93Hut_simulation)

The good thing is that it works OK, and development while being able to
use the REPL accelerates coding so much (in comparison to compiled
languages), but the bad news is that it is ~25x slower than my own
Fortran version :-( (I have tried to make all functions type stable and
I have followed the same algorithm as in the Fortran version).

I haven't done any code profiling yet, so I don't know which parts of
the code are so expensive, but I will try to investigate these days...

Any pointers/help on how to best proceed to identify bottlenecks and how
to get rid of them in Julia are most welcome.
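
[Editor's note: a generic sketch of the standard profiling workflow. In Julia
0.5, @profile and the Profile module are available from Base without an import;
run_simulation, bodies, and nsteps are hypothetical stand-ins for the entry
point of the code discussed above.]
```
run_simulation(bodies, 1)          # run once first so profiling excludes compilation
Profile.clear()
@profile run_simulation(bodies, nsteps)
Profile.print()                    # tree report of where time is spent

# For allocation hotspots: restart julia with --track-allocation=user,
# run the code, then inspect the generated *.mem files.
```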

Thanks,
-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/  


Re: [julia-users] Reducing complexity of OffsetArrays

2016-11-01 Thread Tim Holy
There's still a simple model: the indices are the *key* of the entry. Think
about an array as a very special Dict. You could create a Dict with integer
keys, let's say from -2:5. Presumably you wouldn't be exactly happy if
there were some magical way that the number you originally stored as `d[-1]
= 3.2` were alternatively accessible as `d[2]`, simply because the smallest
index was -2 and therefore 3.2 is the "second" entry?

Like a Dict, for an array the value always goes with the key (the indices).
Perhaps this will help:
```
julia> using OffsetArrays

julia> a = OffsetArray(rand(11), -5:5)
OffsetArrays.OffsetArray{Float64,1,Array{Float64,1}} with indices -5:5:
 0.815289
 0.0043941
 0.00403153
 0.478065
 0.150709
 0.256156
 0.934703
 0.672495
 0.428721
 0.242469
 0.43742

julia> idx = OffsetArray(-1:1, -1:1)
OffsetArrays.OffsetArray{Int64,1,UnitRange{Int64}} with indices -1:1:
 -1
  0
  1

julia> b = a[idx]
OffsetArrays.OffsetArray{Float64,1,Array{Float64,1}} with indices -1:1:
 0.150709
 0.256156
 0.934703

julia> a[-1]
0.15070935766983662

julia> b[-1]
0.15070935766983662
```
So indexing `b = a[idx]` means that `b[j] = a[idx[j]]`. Does that help?

Best,
--Tim
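
[Editor's note: to make the size-to-indices migration mentioned later in this
thread concrete, a small sketch assuming Julia 0.5, where `indices` is the
relevant function (it became `axes` in later versions).]
```
using OffsetArrays

# Bakes in the 1-based assumption: out of bounds for offset arrays.
function sumsq_onebased(a::AbstractVector)
    s = zero(eltype(a))
    for i in 1:length(a)
        s += a[i]^2
    end
    return s
end

# Indices-aware: correct for any AbstractVector.
function sumsq(a::AbstractVector)
    s = zero(eltype(a))
    for i in indices(a, 1)
        s += a[i]^2
    end
    return s
end

sumsq(OffsetArray(rand(5), -2:2))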

On Tue, Nov 1, 2016 at 3:36 PM, Bob Portmann  wrote:

> Like I said, no real practical experience yet. The increase in complexity
> that I fear is the loss of, e.g., writing arr[2,3] and having it be the
> element in the 2nd row and third column (i.e., the loss of a simple model
> of how things are laid out). Maybe my fears are unfounded. Others don't
> seem concerned it would seem.
>
> I'll check out those packages that you mention.
>
> Thanks,
> Bob
>
> On Sun, Oct 30, 2016 at 2:29 PM, Tim Holy  wrote:
>
>> I'm afraid I still don't understand the claimed big increment in
>> complexity. First, let's distinguish "generic offset arrays" from the
>> OffsetArrays.jl package. If you're happy using OffsetArrays, you don't have
>> to write your own offset-array type. Being able to use an established &
>> tested package reduces your burden a lot, and you can ignore the second
>> half of the devdocs page entirely.
>>
>> If you just want to *use* OffsetArrays.jl, the basic changes in coding
>> style for writing indices-aware code are:
>>
>> - any place you used to call `size`, you probably want to call `indices`
>> instead (and likely make minor adjustments elsewhere, since `indices`
>> returns a tuple-of-ranges---but such changes tend to be very obvious);
>> - check all uses of `similar`; some will stay as-is, others will migrate
>> to `similar(f, inds)` style.
>>
>> In my experience, that's just about it. The devdocs goes into quite a lot
>> of detail to explain the rationale, but really the actual changes are quite
>> small. While you can't quite do it via `grep` and `sed`, to me that just
>> doesn't seem complicated.
>>
>> Where the pain comes is that if you're converting old code, you sometimes
>> have to think your way through it again---"hmm, what do I really mean by
>> this index"? If your code had complicated indexing the first time you wrote
>> it, unfortunately you're going to have to think about it carefully again;
>> so in some cases, "porting" code is almost as bad as writing it the first
>> time. However, if you write indices-aware code in the first place, in my
>> experience the added burden is almost negligible, and in quite a few cases
>> the ability to offset array indices makes things *easier* (e.g., "padding"
>> an array on its edges is oh-so-much-clearer than it used to be, it's like a
>> different world). That's the whole reason I implemented this facility in
>> julia-0.5: to make life easier, not to make it harder. (Personally I think
>> the whole argument over 0-based and 1-based indexing is stupid; it's the
>> ability to use arbitrary indices that I find interesting & useful, and it
>> makes most of my code prettier.)
>>
>> For examples of packages that use OffsetArrays, check the following:
>> - CatIndices
>> - FFTViews
>> - ImageFiltering
>>
>> ImageFiltering is a mixed bag: there's a small performance penalty in a
>> few cases (even if you use @unsafe) because LLVM doesn't always
>> optimize code as well as it could in principle (maybe @polly will help
>> someday...). Because image filtering is an extraordinarily
>> performance-sensitive operation, there are a few places where I had to make
>> some yucky hand optimizations.
>>
>> Again, I'm very happy to continue this conversation---I'd really like to
>> understand your concern, but without a concrete example I confess I'm lost.
>>
>> Best,
>> --Tim
>>
>>
>> On Sat, Oct 29, 2016 at 8:44 PM, Bob Portmann 
>> wrote:
>>
>>> Thanks for the thoughtful response. I hope you'll tolerate one reply in
>>> the "abstract".
>>>
>>> I am resisting the big change that occurred in 0.5. In 0.4 and earlier
>>> if one declares an array as an `AbstractArray` in a function then one knew
>>> that the indices were one based 

Re: [julia-users] Recursive data structures with Julia

2016-11-01 Thread Angel de Vicente
Hi Ralph,

Ralph Smith  writes:
> Conversion is done by methods listed in base/nullable.jl

OK, but more than the conversion rules I was wondering about when
conversion will be invoked. Conversion does not happen when calling a
function (so, in this example, a function expecting a Nullable{BST} but
given a BST will not work), but it does happen when used in an
expression (as in the one I mentioned, node.left = BST(key), which gets
converted to Nullable{BST}). Not sure if there are any other subtleties
that I should be aware of.
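
[Editor's note: a minimal sketch of the distinction, assuming Julia 0.5 and a
simple BST type with a Nullable left field, as in the earlier thread.]
```
type BST
    key::Int
    left::Nullable{BST}
    BST(key::Int) = new(key, Nullable{BST}())
end

f(x::Nullable{BST}) = isnull(x)

node = BST(1)
node.left = BST(2)    # works: assignment converts BST -> Nullable{BST}
# f(BST(3))           # MethodError: dispatch does not convert
f(Nullable(BST(3)))   # works: wrap explicitly
```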

Thanks,
-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/  


Re: [julia-users] Re: What's julia's answer to tapply or accumarray?

2016-11-01 Thread Peter Haverty
It's great that you are making a collection of these. I see that you have
a vectorized searchsortedfirst (findInterval).  I also felt the need for
that one and have a version in RLEVectors.jl. I'll have a look at
VectorizedRoutines.jl to see what I can contribute.

Pete


Peter M. Haverty, Ph.D.
Genentech, Inc.
phave...@gene.com

On Mon, Oct 31, 2016 at 8:06 PM, Chris Rackauckas 
wrote:

> For reference I've been gathering these kinds of "vectorized" functions
> in, well, VectorizedRoutines.jl
> . I am just
> trying to get an implementation of all of those vectorized routines you
> know and love since, in some cases, they lead to slick code. You can find
> an accumarray there. Feel free to add a PR that has more.
>
> On Monday, October 31, 2016 at 12:38:06 PM UTC-7, phav...@gene.com wrote:
>>
>> RLEVectors.jl  now has
>> a tapply function where an RLE is used as the factor.
>>
>>
>> On Thursday, March 20, 2014 at 10:46:33 AM UTC-7, James Johndrow wrote:
>>>
>>> I cannot seem to find a built-in julia function that performs the
>>> function of tapply in R or accumarray in matlab. Anyone know of one?
>>>
>>
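
[Editor's note: for readers landing here, a plain-Julia accumarray-style
sketch; a generic illustration, not the VectorizedRoutines.jl implementation.]
```
# Sum the values v into bins given by the integer subscripts subs.
function accumarray_sketch(subs::Vector{Int}, v::Vector, sz::Int = maximum(subs))
    out = zeros(eltype(v), sz)
    for (i, s) in enumerate(subs)
        out[s] += v[i]
    end
    return out
end

accumarray_sketch([1, 2, 1, 3], [10.0, 20.0, 30.0, 40.0])  # -> [40.0, 20.0, 40.0]
```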


[julia-users] Re: jl_stat_ctime

2016-11-01 Thread Jeffrey Sarnoff
If you are working with file information, use e.g. `file_stats = stat(filename); 
file_creation_time = file_stats.ctime; file_modification_time = 
file_stats.mtime;` 
You will get Float64 values; to make FineComputerTimes from those:

function FineComputerTime(stat_time::Float64)
    nanosecs = round(UInt64, stat_time * 1.0e9)
    return FineComputerTime(nanosecs)
end




On Tuesday, November 1, 2016 at 3:44:07 PM UTC-4, Jeffrey Sarnoff wrote:
>
> Look at the help for tic() and toc().
> Do you care about interfacing directly with jl_ routines?  If not, and you 
> are trying to make your own harness ... perhaps this would help:
> #=
>     Using immutable rather than type with fields that are
>     simple and immediate values keeps information directly
>     available (rather than indirectly available, like arrays).
>
>     Use Int64 because nanosecond timing uses 64 bits (UInt64).
>
>     time_ns() "Get the time in nanoseconds.
>         The time corresponding to 0 is undefined,
>         and wraps every 5.8 years."
>
>     time_zero because the timer is given as a UInt64 value, and
>     there are more of those than positive Int64s.
> =#
>
> const time_zero = [time_ns()]
> get_time_zero() = time_zero[1]
> function set_time_zero(nanoseconds::UInt64)
> time_zero[1] = nanoseconds
> return nanoseconds
> end
>
> immutable FineComputerTime
> seconds::Int64
> nanoseconds::Int64
> end
>
> function FineComputerTime(nanosecs::UInt64)
> nanosecs -= get_time_zero()
> secs, nsecs = fldmod( nanosecs, 1_000_000_000%UInt64 ) # value%UInt64 
> is a fast way to force the type
> return FineComputerTime( Int64(secs), Int64(nsecs) )
> end
>
> FineComputerTime() = FineComputerTime(time_ns())
>
>
>
>
>
>
>
>
> On Friday, October 28, 2016 at 10:07:42 AM UTC-4, Brandon Taylor wrote:
>>
>> Right now in base jl_stat_ctime looks like this:
>>
>> JL_DLLEXPORT double jl_stat_ctime(char *statbuf)
>> {
>> uv_stat_t *s;
>> s = (uv_stat_t*)statbuf;
>> return (double)s->st_ctim.tv_sec + (double)s->st_ctim.tv_nsec * 1e-9;
>> }
>>
>> And it's called with
>>
>> ccall(:jl_stat_ctime,   Float64, (Ptr{UInt8},), buf)
>>
>> I'd like to simplify this.
>>
>> I'd like a type
>>
>> type FineComputerTime
>> seconds::Int
>> nanoseconds::Int
>> end
>>
>> And a way to fill it in using the stat buffer.
>>
>> Can anyone offer some tips? The c code keeps confusing me.
>>
>>
>>
>>
>> I
>>
>

Re: [julia-users] best way to reinstall packages after upgrading Julia?

2016-11-01 Thread Steven G. Johnson


On Tuesday, November 1, 2016 at 4:47:58 PM UTC-4, Yichao Yu wrote:
>
> I believe the currently recommended way is to copy REQUIRE and run Pkg.update
>

Is this documented somewhere?  Would be nice to have an easy way to do it 
from the REPL prompt, without users having to know where these files are. 


Re: [julia-users] best way to reinstall packages after upgrading Julia?

2016-11-01 Thread Yichao Yu
On Nov 1, 2016 4:32 PM, "Steven G. Johnson"  wrote:
>
> When you upgrade from (say) Julia 0.4 to 0.5, you have to re-install all
of the packages because the package directory changes.   It seems like
there should be an automated way to do this.  Does something like this
exist?  Seems like it should be a built-in Pkg feature.
>
> (It would be straightforward to write a function that reads the REQUIRE
file from another Julia version and adds all of those packages.)

I believe the currently recommended way is to copy REQUIRE and run Pkg.update
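
[Editor's note: a minimal sketch of that workflow; the version directories are
illustrative and assume default package paths.]
```
# After installing 0.5: copy the 0.4 REQUIRE into the new package
# directory, then resolve.
old = joinpath(homedir(), ".julia", "v0.4", "REQUIRE")
new = joinpath(Pkg.dir(), "REQUIRE")   # e.g. ~/.julia/v0.5/REQUIRE
cp(old, new; remove_destination=true)
Pkg.resolve()                          # or Pkg.update()
```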


Re: [julia-users] Reducing complexity of OffsetArrays

2016-11-01 Thread Bob Portmann
Like I said, no real practical experience yet. The increase in complexity
that I fear is the loss of, e.g., writing arr[2,3] and having it be the
element in the 2nd row and third column (i.e., the loss of a simple model
of how things are laid out). Maybe my fears are unfounded. Others don't
seem concerned it would seem.

I'll check out those packages that you mention.

Thanks,
Bob

On Sun, Oct 30, 2016 at 2:29 PM, Tim Holy  wrote:

> I'm afraid I still don't understand the claimed big increment in
> complexity. First, let's distinguish "generic offset arrays" from the
> OffsetArrays.jl package. If you're happy using OffsetArrays, you don't have
> to write your own offset-array type. Being able to use an established &
> tested package reduces your burden a lot, and you can ignore the second
> half of the devdocs page entirely.
>
> If you just want to *use* OffsetArrays.jl, the basic changes in coding
> style for writing indices-aware code are:
>
> - any place you used to call `size`, you probably want to call `indices`
> instead (and likely make minor adjustments elsewhere, since `indices`
> returns a tuple-of-ranges---but such changes tend to be very obvious);
> - check all uses of `similar`; some will stay as-is, others will migrate to
> `similar(f, inds)` style.
>
> In my experience, that's just about it. The devdocs goes into quite a lot
> of detail to explain the rationale, but really the actual changes are quite
> small. While you can't quite do it via `grep` and `sed`, to me that just
> doesn't seem complicated.
>
> Where the pain comes is that if you're converting old code, you sometimes
> have to think your way through it again---"hmm, what do I really mean by
> this index"? If your code had complicated indexing the first time you wrote
> it, unfortunately you're going to have to think about it carefully again;
> so in some cases, "porting" code is almost as bad as writing it the first
> time. However, if you write indices-aware code in the first place, in my
> experience the added burden is almost negligible, and in quite a few cases
> the ability to offset array indices makes things *easier* (e.g., "padding"
> an array on its edges is oh-so-much-clearer than it used to be, it's like a
> different world). That's the whole reason I implemented this facility in
> julia-0.5: to make life easier, not to make it harder. (Personally I think
> the whole argument over 0-based and 1-based indexing is stupid; it's the
> ability to use arbitrary indices that I find interesting & useful, and it
> makes most of my code prettier.)
>
> For examples of packages that use OffsetArrays, check the following:
> - CatIndices
> - FFTViews
> - ImageFiltering
>
> ImageFiltering is a mixed bag: there's a small performance penalty in a
> few cases (even if you use @unsafe) because LLVM doesn't always
> optimize code as well as it could in principle (maybe @polly will help
> someday...). Because image filtering is an extraordinarily
> performance-sensitive operation, there are a few places where I had to make
> some yucky hand optimizations.
>
> Again, I'm very happy to continue this conversation---I'd really like to
> understand your concern, but without a concrete example I confess I'm lost.
>
> Best,
> --Tim
>
>
> On Sat, Oct 29, 2016 at 8:44 PM, Bob Portmann 
> wrote:
>
>> Thanks for the thoughtful response. I hope you'll tolerate one reply in
>> the "abstract".
>>
>> I am resisting the big change that occurred in 0.5. In 0.4 and earlier if
>> one declares an array as an `AbstractArray` in a function then one knew
>> that the indices were one based (a nice simple model, even if it is hated
>> by many). In 0.5, if one wants to write general code, then one has to
>> assume that arrays can have ANY indices. And one needs to write code using
>> more abstract tools. This seems to me to be a large cost in complexity for
>> the small subset of cases where offset indices are helpful. This is the
>> core issue to me.
>>
>> One way around this, it would seem, is to declare arrays as `::Array` and
>> not `::AbstractArray` in functions (if one want to be sure they are
>> `OneTo`). But then one gives up accepting many types of arrays that would
>> pose no problem (e.g, DistributedArrays with OneTo indices). Thus, I'm
>> proposing to have a high level abstract type that would capture all arrays
>> types that can be assumed to be `OneTo`. Then one can write library
>> functions against that type. This alone would help I think.
>>
>> The auto-conversion is an extra step that I thought might work since it
>> is (I think) low cost to convert an `OffsetArray` to a `OneTo` array . Thus
>> if you passed an OffsetArray to a function that takes the abstract OneTo
>> type (I kept its name AbstractArray above but that need not be its name)
>> you expect that if it returned a `similar` array it would be of `OneTo`
>> type. You would then have to convert it back to an OffsetArray type. It
>> 

[julia-users] best way to reinstall packages after upgrading Julia?

2016-11-01 Thread Steven G. Johnson
When you upgrade from (say) Julia 0.4 to 0.5, you have to re-install all of 
the packages because the package directory changes.   It seems like there 
should be an automated way to do this.  Does something like this exist? 
 Seems like it should be a built-in Pkg feature.

(It would be straightforward to write a function that reads the REQUIRE 
file from another Julia version and adds all of those packages.)


[julia-users] Re: jl_stat_ctime

2016-11-01 Thread Jeffrey Sarnoff
Look at the help for tic() and toc().
Do you care about interfacing directly with jl_ routines?  If not, and you 
are trying to make your own harness ... perhaps this would help:
#=
    Using immutable rather than type with fields that are
    simple and immediate values keeps information directly
    available (rather than indirectly available, like arrays).

    Use Int64 because nanosecond timing uses 64 bits (UInt64).

    time_ns() "Get the time in nanoseconds.
        The time corresponding to 0 is undefined,
        and wraps every 5.8 years."

    time_zero because the timer is given as a UInt64 value, and
    there are more of those than positive Int64s.
=#

const time_zero = [time_ns()]
get_time_zero() = time_zero[1]
function set_time_zero(nanoseconds::UInt64)
time_zero[1] = nanoseconds
return nanoseconds
end

immutable FineComputerTime
seconds::Int64
nanoseconds::Int64
end

function FineComputerTime(nanosecs::UInt64)
nanosecs -= get_time_zero()
secs, nsecs = fldmod( nanosecs, 1_000_000_000%UInt64 ) # value%UInt64 
is a fast way to force the type
return FineComputerTime( Int64(secs), Int64(nsecs) )
end

FineComputerTime() = FineComputerTime(time_ns())








On Friday, October 28, 2016 at 10:07:42 AM UTC-4, Brandon Taylor wrote:
>
> Right now in base jl_stat_ctime looks like this:
>
> JL_DLLEXPORT double jl_stat_ctime(char *statbuf)
> {
> uv_stat_t *s;
> s = (uv_stat_t*)statbuf;
> return (double)s->st_ctim.tv_sec + (double)s->st_ctim.tv_nsec * 1e-9;
> }
>
> And it's called with
>
> ccall(:jl_stat_ctime,   Float64, (Ptr{UInt8},), buf)
>
> I'd like to simplify this.
>
> I'd like a type
>
> type FineComputerTime
> seconds::Int
> nanoseconds::Int
> end
>
> And a way to fill it in using the stat buffer.
>
> Can anyone offer some tips? The c code keeps confusing me.
>
>
>
>
> I
>


Re: [julia-users] Re: Webapp Deployment

2016-11-01 Thread Shashi Gowda
Hi Reuben,

what's in hello.jl ? There isn't a examples/hello.jl in Escher is there?

A file you are trying to serve should end with a function definition such
as:

function main(window) # must take an argument
end

And this function should return the UI object you want to render.

On Tue, Nov 1, 2016 at 9:50 PM, wookyoung noh  wrote:

> Hello, I'm a developer of Bukdu.
> https://github.com/wookay/Bukdu.jl
>
> it's a web development framework on top of HttpServer.jl
> Thanks!
>
> On Tuesday, November 1, 2016 at 1:08:01 PM UTC+9, Reuben Brooks wrote:
>>
>> Context: I love julia, and I've never built any kind of webapp. Most of
>> my programming experience is in Mathematica and Julia...hacking things
>> together (poorly) in Python when nothing else works.
>>
>> Problem: I have a script  / notebook in julia that pulls data from
>> sources, analyzes it, builds fancy plots, and has lots of nice information.
>> Now I want to build a basic webapp that will allow me to access this
>> information anywhere, anytime (will be updated regularly).
>>
>> Question 1: is there a julia package that suits my needs well, or should
>> I look at using some other front end to create the frontend? Elm intrigues
>> me, as much for the learning as for the actual solution.
>>
>> Bottom line: I don't know enough about what I'm wading into to choose
>> wisely. What does the community suggest?
>>
>


Re: [julia-users] Re: 0.5 new generators syntax question

2016-11-01 Thread Stefan Karpinski
When in doubt, use @benchmark!

On Tue, Nov 1, 2016 at 5:49 AM, Johan Sigfrids 
wrote:

> Given that allocating an array for 50 Ints, filling it up, and then
> summing it all together probably takes less than a microsecond, any
> difference between allocating and not allocating will disappear in the
> noise. As Yichao Yu mentions, what you end up measuring is the time it
> takes to set up the computation. This is a case where using
> BenchmarkTools can be really helpful.
>
> julia> using BenchmarkTools
>
> julia> @benchmark sum([2*t for t in 1:2:100])
> BenchmarkTools.Trial:
>   samples:  1
>   evals/sample: 907
>   time tolerance:   5.00%
>   memory tolerance: 1.00%
>   memory estimate:  560.00 bytes
>   allocs estimate:  2
>   minimum time: 121.00 ns (0.00% GC)
>   median time:  131.00 ns (0.00% GC)
>   mean time:162.82 ns (10.91% GC)
>   maximum time: 2.45 μs (0.00% GC)
>
> julia> @benchmark sum(2*t for t in 1:2:100)
> BenchmarkTools.Trial:
>   samples:  1
>   evals/sample: 962
>   time tolerance:   5.00%
>   memory tolerance: 1.00%
>   memory estimate:  80.00 bytes
>   allocs estimate:  3
>   minimum time: 86.00 ns (0.00% GC)
>   median time:  90.00 ns (0.00% GC)
>   mean time:107.20 ns (6.99% GC)
>   maximum time: 3.64 μs (95.05% GC)
>
>
> On Monday, October 31, 2016 at 9:42:14 PM UTC+2, Jesse Jaanila wrote:
>>
>> Hi,
>>
>> I was experimenting with the new 0.5 features and they are great! But to
>> my surprise,
>> the generator syntax doesn't work as I'm expecting. Let's say I want to
>> calculate
>> some summation. With the old syntax I could do
>>
>> @time sum([2*t for t in 1:2:100])
>>   0.015104 seconds (13.80 k allocations: 660.366 KB)
>>
>> that allocates the array within the summation. Now I thought this would a
>> prime example
>> where the memory overhead could be decreased by using the new notation
>> i.e.
>>
>> @time sum(2*t for t in 1:2:100)
>>   0.019215 seconds (18.98 k allocations: 785.777 KB)
>>
>> ,but generator syntax performs slightly worse. Also if we want find the
>> maximum we would do
>>
>> julia> @time maximum([2*t for t in 1:2:100])
>>   0.015182 seconds (12.90 k allocations: 606.166 KB)
>> 198
>>
>> julia> @time maximum(2*t for t in 1:2:100)
>>   0.019935 seconds (18.74 k allocations: 772.180 KB)
>> 198
>>
>> Have I understood the new generator syntax incorrectly or should the new
>> syntax perform
>> better in these code snippet examples?
>>
>>
>>
>>
>>
>>
>>


[julia-users] Re: Error installing Atom.jl package

2016-11-01 Thread mmus
Consider posting at http://discuss.junolab.org/ 



On Tuesday, November 1, 2016 at 12:46:15 PM UTC-4, Joachim Inkmann wrote:
>
> Good day,
>
> I am new to Julia. I have installed Julia v0.50 and the latest version of 
> Atom on a Windows 7 computer. The uber-juno package is installed as well. 
> When I try to run Julia within Atom, I get the following error message:
>
>
>
> and a pop-up window showing the following
>
>
>
> When I try to run Pkg.add("Atom") in a terminal, I get the following error 
> message:
>
>
>
> Does anyone have an idea what I could do to solve this problem? Thanks a 
> lot.
>
> Regards, Joachim
>


[julia-users] Error installing Atom.jl package

2016-11-01 Thread Joachim Inkmann
Good day,

I am new to Julia. I have installed Julia v0.50 and the latest version of 
Atom on a Windows 7 computer. The uber-juno package is installed as well. 
When I try to run Julia within Atom, I get the following error message:



and a pop-up window showing the following



When I try to run Pkg.add("Atom") in a terminal, I get the following error 
message:



Does anyone have an idea what I could do to solve this problem? Thanks a 
lot.

Regards, Joachim


[julia-users] Re: Webapp Deployment

2016-11-01 Thread wookyoung noh
Hello, I'm a developer of Bukdu.
https://github.com/wookay/Bukdu.jl

it's a web development framework on top of HttpServer.jl
Thanks!

On Tuesday, November 1, 2016 at 1:08:01 PM UTC+9, Reuben Brooks wrote:
>
> Context: I love julia, and I've never built any kind of webapp. Most of my 
> programming experience is in Mathematica and Julia...hacking things 
> together (poorly) in Python when nothing else works.
>
> Problem: I have a script  / notebook in julia that pulls data from 
> sources, analyzes it, builds fancy plots, and has lots of nice information. 
> Now I want to build a basic webapp that will allow me to access this 
> information anywhere, anytime (will be updated regularly). 
>
> Question 1: is there a julia package that suits my needs well, or should I 
> look at using some other front end to create the frontend? Elm intrigues me, 
> as much for the learning as for the actual solution. 
>
> Bottom line: I don't know enough about what I'm wading into to choose 
> wisely. What does the community suggest?
>


[julia-users] Re: Webapp Deployment

2016-11-01 Thread Alex Mellnik
Hi Reuben,

I largely work in this space.  I can walk through a few possible 
architectures that I have used:

1)  If the data pull and processing is fairly decoupled from the display, 
it is often easiest to use Julia only on the back end.  I have a few 
systems that pull new data every hour and add it to a database.  I then use 
Julia to do a bunch of processing and analysis, then load the results back 
into a different portion of the database.  You can then build a normal 
data-driven web app using standard tools that only looks at the database, 
and doesn't need to interface with Julia anywhere.  If you don't have any 
experience building web pages, I would suggest using Angular 1 and Plotly 
for the front end, and Node/Express for the back end.  Some basic data 
manipulation can be done via SQL if you use MySQL or similar, or in the web 
app itself using things like d3 and lodash.  I don't have any publicly 
available examples of this, but I could put one together if you like.

2) If the data pull and processing is strongly coupled to the data display, 
you can call Julia directly from a server-side web application rather than 
look at cached data.  You have a few options for the server-side code.  One 
is to call Julia from Node using node-julia 
. I have a rough example of how you 
would do this here .  One risk is 
that while node-julia works, it's a bit tricky to use, and I don't know 
what Jeff's plans for the package are.  You would again use a normal 
front-end tools like Angular/React for the front end.  

Alternatively, you could write the back-end in something like Mux.jl 
rather than in Node; see the sketch after this list.  I don't do this, 
because I need to use things like HTTPS and SSPI in an enterprise 
environment, but I think it should work fine.

3) Lastly, you could write the whole thing in Julia using something like 
Escher or Genie.jl .  These are 
both very interesting projects and represent an incredible level of work, 
but I don't think they are ready for production use yet.  
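
[Editor's note: a minimal Mux.jl sketch of the Julia-only back end mentioned in
option 2. The routes, port, and response bodies are illustrative placeholders.]
```
using Mux

# A tiny app: one HTML page plus a data endpoint for cached results.
@app dashboard = (
    Mux.defaults,
    page("/", respond("<h1>Analysis results</h1>")),
    page("/data", respond("""{"status": "ok"}""")),
    Mux.notfound()
)

serve(dashboard, 8000)  # http://localhost:8000/
```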

I strongly recommend the first option if possible.  It might seem like a 
bunch of different parts that all need to work together, but I think it's 
actually the easiest to set up and maintain, and lets you use the 
best tools in each domain.  Failing that, try the 2nd.

If you have any questions or would like to discuss this further just let me 
know.  Cheers,

Alex


On Monday, October 31, 2016 at 9:08:01 PM UTC-7, Reuben Brooks wrote:
>
> Context: I love julia, and I've never built any kind of webapp. Most of my 
> programming experience is in Mathematica and Julia...hacking things 
> together (poorly) in Python when nothing else works.
>
> Problem: I have a script  / notebook in julia that pulls data from 
> sources, analyzes it, builds fancy plots, and has lots of nice information. 
> Now I want to build a basic webapp that will allow me to access this 
> information anywhere, anytime (will be updated regularly). 
>
> Question 1: is there a julia package that suits my needs well, or should I 
> look at using some other front end to create the frontend? Elm intrigues me, 
> as much for the learning as for the actual solution. 
>
> Bottom line: I don't know enough about what I'm wading into to choose 
> wisely. What does the community suggest?
>


[julia-users] Re: Webapp Deployment

2016-11-01 Thread Reuben Brooks
The files are definitely accessible; I get a different error if I try to 
open a nonexistent file. Some example files simply do not load. I 
suspect it may be a v0.5 issue, as Tomas notes below. Will follow up on 
GitHub / Gitter.

On Tuesday, November 1, 2016 at 3:17:18 AM UTC-5, Adrian Salceanu wrote:
>
> My experience with Escher is limited to reading the docs and looking at 
> the sources, but it seems to be related to loading the file (or it's 
> content): 
>
> function loadfile(filename)
>     if isfile(filename)
>         try
>             ui = include(filename)
>             if isa(ui, Function)
>                 return ui
>             else
>                 warn("$filename did not return a function")
>                 return (w) -> Elem(:p, string(
>                     filename, " did not return a UI function"
>                 ))
>             end
>         catch err
>             bt = backtrace()
>             return (win) -> Elem(:pre, sprint() do io
>                 showerror(io, err)
>                 Base.show_backtrace(io, bt)
>             end)
>         end
>     else
>         return (w) -> Elem(:p, string(
>             filename, " could not be found."
>         ))
>     end
> end
> in https://github.com/shashi/Escher.jl/blob/master/src/cli/serve.jl
>
> So maybe make sure the example files are accessible (readable)? 
>
> You can use the usual communication paths: a new issue in GitHub or 
> StackOverflow. Also, check if there's a Gitter channel for Escher. 
>
>
> marți, 1 noiembrie 2016, 09:38:11 UTC+2, Reuben Brooks a scris:
>>
>> When I try to run the examples or basic hello.jl file in Escher, always 
>> get this in browser: ".../Escher/examples/hello.jl did not return a UI 
>> function"
>>
>> I don't see any issues filed on github with this, suspect it's something 
>> on my end. What would be the appropriate channel for me to get some help on 
>> this?
>>
>> On Tuesday, November 1, 2016 at 1:10:18 AM UTC-5, Adrian Salceanu wrote:
>>>
>>> Sounds like the answer is https://github.com/shashi/Escher.jl 
>>>
>>> It was built exactly for your use case and it's actually inspired by Elm 
>>>
>>>
>>>
>>> marți, 1 noiembrie 2016, 06:08:01 UTC+2, Reuben Brooks a scris:

 Context: I love julia, and I've never built any kind of webapp. Most of 
 my programming experience is in Mathematica and Julia...hacking things 
 together (poorly) in Python when nothing else works.

 Problem: I have a script  / notebook in julia that pulls data from 
 sources, analyzes it, builds fancy plots, and has lots of nice 
 information. 
 Now I want to build a basic webapp that will allow me to access this 
 information anywhere, anytime (will be updated regularly). 

 Question 1: is there a julia package that suits my needs well, or 
 should I look at using some other front end to create the frontend? Elm 
 intrigues me, as much for the learning as for the actual solution. 

 Bottom line: I don't know enough about what I'm wading into to choose 
 wisely. What does the community suggest?

>>>

[julia-users] Re: Webapp Deployment

2016-11-01 Thread Tomas Mikoviny
I'm not sure Escher works with v0.5 if that's what Adrian runs


[julia-users] Re: Unusual amount of storage allocation

2016-11-01 Thread Douglas Bates
I moved this discussion to https://discourse.julialang.org where it is 
easier to format the code chunks.

On Monday, October 31, 2016 at 2:04:02 PM UTC-5, Douglas Bates wrote:
>
> I am encountering an unexpected amount of storage allocation in the 
> cfactor!{A::HBlkDiag) method in the MixedModels package.  See
> https://github.com/dmbates/MixedModels.jl/blob/master/src/cfactor.jl  for 
> the code.
>
> An HBlkDiag matrix is a homogeneous block diagonal matrix where 
> "homogeneous" refers to the fact that all the diagonal blocks are square 
> and of the same size.  Because of this homogeneity the blocks can be stored 
> in an r by r by k array where r is the size of each of the square block and 
> k is the number of such blocks.
>
> On entry to this method the blocks are symmetric, positive semi-definite. 
>  I want to overwrite the upper triangle of each of these blocks with its 
> Cholesky factor.  I call LAPACK.potrf! directly because I don't want 
> cholfact! to throw a non-positive-definite error.  The strange thing to me 
> is that when I monitor the storage allocation, I get a huge amount of 
> storage being allocated in the line with that call.  This may be because 
> LAPACK.potrf! returns a tuple of the original matrix and an Int (the info 
> code) but I'm not sure.
>
> To see an example of this unusual amount of allocation try the following 
> code with julia --track-allocation=user
>
> using Feather, MixedModels
> cd(Pkg.dir("MixedModels", "test", "data"))
> sleepstudy = Feather.read("sleepstudy.feather", nullable=false)
> fm1 = fit!(lmm(Reaction ~ 1 + Days + (1 + Days | Subject), sleepstudy))
> Profile.clear_malloc_data()
> devs, vars, betas, thetas = bootstrap(10_000, fm1)
>
> I get
>
> - function cfactor!{T}(A::HBlkDiag{T})
> - Aa = A.arr
> 0 r, s, t = size(Aa)
> 0 if r ≠ s
> 0 throw(ArgumentError("HBlkDiag matrix A must be square"))
> - end
>  94428000 scm = Array(T, (r, r))
> 0 for k in 1 : t  # FIXME: Lots of allocations in this loop
> 0 for j in 1 : r, i in 1 : j
> 0 scm[i, j] = Aa[i, j, k]
> - end
> 566568000 LAPACK.potrf!('U', scm)
> 0 for j in 1 : r, i in 1 : j
> 0 Aa[i, j, k] = scm[i, j]
> - end
> - end
>  10492000 UpperTriangular(A)
> - end
>
> In this case the HBlkDiag matrix being decomposed is 36 by 36, 
> consisting of 18 2 by 2 diagonal blocks.  scm is a scratch 2 by 2 
> matrix that is overwritten in sequence by the upper triangle of each of the 
> original 2 by 2 blocks and passed to LAPACK.potrf!
>


[julia-users] [Announcement] Moving to Discourse (Statement of Intent)

2016-11-01 Thread Valentin Churavy


The Julia community has been growing rapidly over the last few years and 
discussions are happening at many different places: there are several 
Google Groups (julia-users, julia-dev, ...), IRC, Gitter, and a few other 
places. Sometimes packages or organisations also have their own forums and 
chat rooms.


In the past, Discourse has been brought up as an alternative platform that 
we could use instead of Google Groups and that would allow us to invite the 
entire Julia community into one space. 

Changing something established is tricky and so we decided to move slowly 
on this.

Right now, we are only moving julia-dev to Discourse (see the corresponding 
post on julia-dev for a timeline).


We would like to solicit feedback from the broader Julia community about 
moving julia-users to Discourse as well, and potentially other mailing 
lists like julia-stats.
If you are interested in trying it out, please visit 
http://discourse.julialang.org.


Discourse is organised by categories so *Development* is the new home for 
julia-dev and *General* would be the future home of julia-users.

If we are happy with Discourse as a solution, we will move forward with the 
migration. The timetable would be roughly as follows:



   - About 4 weeks after the move of julia-dev, a decision will be made on 
   whether to move julia-users
   - Announcement on the mailing list, one week before the move.
   - Setting julia-users into read-only mode.
   - Final announcement.

If you have feedback or comments, please post them at 
http://discourse.julialang.org/t/migration-of-google-groups-to-discourse or 
in this thread.


[julia-users] Re: Stumped by a subtyping issue

2016-11-01 Thread Eric Davies
Thank you, that clears things up. And thanks for adding to the manual!

On Monday, 31 October 2016 21:16:27 UTC-5, vav...@uwaterloo.ca wrote:
>
> Eric,
>
> The following paragraph from the Julia manual may be relevant.  
> This paragraph is an excerpt from the section on parametric type aliases in 
> the chapter "Types."  I am quite familiar with this paragraph because I 
> authored it in a PR after I was burned by a similar issue!
>
> -- Steve Vavasis
>
>
> "This declaration of Vector creates a subtype relation Vector{Int} <: 
> Vector. However, it is not always the case that a parametric typealias 
> statement creates such a relation; for example, the statement:
>
> typealias AA{T} Array{Array{T,1},1}
>  
>
> does not create the relation AA{Int} <: AA. The reason is that 
> Array{Array{T,1},1} is not an abstract type at all; in fact, it is a 
> concrete type describing a 1-dimensional array in which each entry is an 
> object of type Array{T,1} for some value of T."
>
>
>
>
>
>
> On Monday, October 31, 2016 at 7:43:58 PM UTC-4, Eric Davies wrote:
>>
>> I am getting confusing behaviour with some complex type aliases while 
>> using Cxx.jl and I was hoping someone could point out what is going on.
>>
>> These are the aliases:
>>
>> typealias CppAWSErrorType{C, I<:Integer} CppTemplate{CppBaseType{Symbol("Aws::Client::AWSError")}, Tuple{CppEnum{C, I}}}
>> typealias CppAWSError{C, I<:Integer, Q} CppRef{CppAWSErrorType{C, I}, Q}
>>
>> ...
>> aws_raw_error = @cxx list_buckets_outcome->GetError()
>> thetype = AWSCxx.CppAWSError{Symbol("Aws::S3::S3Errors"), Int32, (false, false, false)}
>>
>> @test typeof(aws_raw_error) <: thetype  # success
>> @test typeof(aws_raw_error) == thetype  # success
>> @test isa(aws_raw_error, thetype)  # success
>> @test typeof(aws_raw_error) <: AWSCxx.CppAWSError  # failure
>> @test isa(aws_raw_error, AWSCxx.CppAWSError)  # failure
>>
>> Can anyone help?
>>
>
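
For the record, a minimal REPL sketch of the distinction (Julia 0.5 syntax, 
where typealias is still valid), using the AA example from the manual excerpt:

julia> typealias AA{T} Array{Array{T,1},1}

julia> Vector{Int} <: Vector     # the alias for Array{T,1} does subtype
true

julia> AA{Int} <: AA             # but AA expands to a concrete Array type
false

Presumably CppAWSError behaves like AA here: with its parameters filled in it 
is not a subtype of the bare alias, because the alias expands to a concrete 
CppRef type.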

Re: [julia-users] inconsistent 'unique' in Atom

2016-11-01 Thread Carolina Brum
Thanks guys. It is misleading, though, and not ergonomic.

On Nov 1, 2016 1:25 AM, "Chris Rackauckas"  wrote:

> Just click on the number and it will expand it.
>
> On Sunday, October 30, 2016 at 7:28:47 PM UTC-7, missp...@gmail.com wrote:
>>
>> Hi Yichao,
>>
>> thanks a lot,
>> it does display it correctly if I use dump, but it's annoying that Atom
>> is inconsistent while displaying the results
>>
>> thanks a lot,
>>
>> On Sunday, October 30, 2016 at 7:14:07 PM UTC-7, Yichao Yu wrote:
>>>
>>> On Sun, Oct 30, 2016 at 10:05 PM,   wrote:
>>> > Hi folks,
>>> >
>>> > I've noticed that in v0.5 the expression
>>> >
>>> >
>>> > unique([122 122.5 10 10.3])
>>> >
>>> >
>>> > gives the following vector as the result:
>>> >
>>> > 122 123 10 10.3
>>> >
>>> >
>>> > Any advice? Is there any maximum number of characters displayed in the
>>> > console, or something similar?
>>>
>>> I'm not sure how Atom display works but maybe you can try
>>> `dump(unique([122 122.5 10 10.3]))`. Also what if you just print `[122
>>> 122.5 10 10.3]` since the unique is supposed to be no op here?
>>>
>>> >
>>> > thanks,
>>>
>>


[julia-users] Re: Mastering Julia, an updated & extended Russian translation issued

2016-11-01 Thread Андрей Логунов
Source code URL: http://dl.dmkpress.com/978-5-97060-370-3.zip

On Tuesday, November 1, 2016 at 22:22:54 UTC+10, Андрей Логунов 
wrote:
>
> Mastering Julia, an updated & extended Russian translation, was issued in 
> September (covering version 0.4.6).
> Source code is available at 
> https://dmkpress.com/catalog/computer/programming/978-5-97060-370-3/
>


[julia-users] Mastering Julia, an updated & extended Russian translation issued

2016-11-01 Thread Андрей Логунов
Mastering Julia, an updated & extended Russian translation, was issued in 
September (covering version 0.4.6).
Source code is available at 
https://dmkpress.com/catalog/computer/programming/978-5-97060-370-3/


[julia-users] Re: Simulink alternative, based on Julia?

2016-11-01 Thread J Luis
This example from the nuklear lib looks like this lib (C, header only) could 
be used for that.

On Tuesday, November 1, 2016 at 08:51:03 UTC, Uwe Fechner wrote:
>
> Hello,
>
> from my point of view, Julia is already a viable alternative to Matlab for 
> flight
> and energy system simulations.
>
> What still is missing is a block editor for dynamic systems like Simulink.
>
> The best open-source alternative to Simulink that I know of is XCos, 
> which is part of Scilab.
> XCos is a block diagram editor and GUI for the hybrid simulator.
>
> I think that the XCos GUI is implemented in Java.
> See: https://www.scilab.org/scilab/features/xcos
>
> What would be a reasonable approach for implementing a block diagram
> editor for Julia? Using QML.jl? Or could such an editor be implemented
> with web technologies, based on Atom?
>
> Any ideas welcome.
>
> Uwe Fechner
>


[julia-users] async read from device?

2016-11-01 Thread Simon Byrne
I'm trying to read from an input device asynchronously. I tried the obvious:

@async begin
    dev = open(STICK_INPUT_DEV)
    while true
        s = read(dev, Stick)
        if s.ev_type == EV_KEY
            println(s)
        end
    end
end

But this doesn't seem to yield correctly. The full code is available here:
https://gist.github.com/simonbyrne/70f8c944ed7a76c95b1c90a964e9d7d1

I did come across this related discussion for file IO which didn't really 
resolve the issue:
https://groups.google.com/d/topic/julia-users/kfu_hgM3bnI/discussion

What's the best way to do this?

Simon


[julia-users] Re: 0.5 new generators syntax question

2016-11-01 Thread Johan Sigfrids
Given that allocating an array of 50 Ints, filling it up, and then summing 
it all together probably takes less than a microsecond, any difference 
between allocating and not allocating disappears in the noise. As Yichao 
Yu mentions, what you end up measuring is the time it takes to set up the 
computation. This is a case where using the BenchmarkTools package can be 
really helpful. 

julia> using BenchmarkTools

julia> @benchmark sum([2*t for t in 1:2:100])
BenchmarkTools.Trial: 
  samples:  1
  evals/sample: 907
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  560.00 bytes
  allocs estimate:  2
  minimum time: 121.00 ns (0.00% GC)
  median time:  131.00 ns (0.00% GC)
  mean time:162.82 ns (10.91% GC)
  maximum time: 2.45 μs (0.00% GC)

julia> @benchmark sum(2*t for t in 1:2:100)
BenchmarkTools.Trial: 
  samples:  1
  evals/sample: 962
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  80.00 bytes
  allocs estimate:  3
  minimum time: 86.00 ns (0.00% GC)
  median time:  90.00 ns (0.00% GC)
  mean time:107.20 ns (6.99% GC)
  maximum time: 3.64 μs (95.05% GC)


On Monday, October 31, 2016 at 9:42:14 PM UTC+2, Jesse Jaanila wrote:
>
> Hi,
>
> I was experimenting with the new 0.5 features and they are great! But to 
> my surprise,
> the generator syntax doesn't work as I'm expecting. Let's say I want to 
> calculate
> some summation. With the old syntax I could do
>
> @time sum([2*t for t in 1:2:100])
>   0.015104 seconds (13.80 k allocations: 660.366 KB)
>
> that allocates the array within the summation. Now I thought this would be a 
> prime example
> where the memory overhead could be decreased by using the new notation, i.e.
>
> @time sum(2*t for t in 1:2:100)
>   0.019215 seconds (18.98 k allocations: 785.777 KB)
>
> but the generator syntax performs slightly worse. Also, if we want to find the 
> maximum we would do
>
> julia> @time maximum([2*t for t in 1:2:100])
>   0.015182 seconds (12.90 k allocations: 606.166 KB)
> 198
>
> julia> @time maximum(2*t for t in 1:2:100)
>   0.019935 seconds (18.74 k allocations: 772.180 KB)
> 198
>
> Have I understood the new generator syntax incorrectly or should the new 
> syntax perform
> better in these code snippet examples?
>
>
>
>
>
>
>

Re: [julia-users] Re: Cost of @view and reshape

2016-11-01 Thread Mauro
Cool!

On Tue, 2016-11-01 at 10:09, Alexey Cherkaev  wrote:
> The package is available at https://github.com/mobius-eng/RadauBVP.jl
>
> I haven't put it into METADATA yet. I would like to improve documentation
> and add some tests before doing this.
>
> However, it is already usable and mostly optimised (except for sparsity,
> which is coming), and I believe it is the only available free ODE BVP solver for
> Julia right now (the only other alternative I am aware of is `bvpsol` from
> ODEInterface.jl, but it is not free, and from the limited amount of tests I've
> done, RadauBVP is faster).

Concerning BVP, what about ApproxFun:
https://github.com/ApproxFun/ApproxFun.jl#solving-ordinary-differential-equations

Concerning sparse Jacobians: I once wrote a matrix coloring package:
https://github.com/mauro3/MatrixColorings.jl It needs some love, but if
you think that it would be useful for you, ping me and I'll try to
update it.
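
For readers unfamiliar with the trick: a coloring groups together Jacobian 
columns whose sparsity patterns do not overlap, so one perturbed function 
evaluation recovers several columns at once. A toy illustration of the idea 
(a sketch of the general technique, not the MatrixColorings.jl API):

f(x) = [x[1]^2, x[2]^2, x[3]^2]   # diagonal Jacobian: no two columns overlap
x = [1.0, 2.0, 3.0]
h = 1e-8
# All three columns share one "color", so a single extra evaluation of f
# recovers the whole (diagonal) Jacobian instead of one evaluation per column:
Jdiag = (f(x + h * ones(3)) - f(x)) / h   # ≈ [2.0, 4.0, 6.0]

With a good coloring the number of evaluations drops from the number of 
columns to the number of colors, which stays small for the banded structures 
typical of collocation.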


> On Sunday, October 30, 2016 at 8:40:07 PM UTC+2, Chris Rackauckas wrote:
>>
>> reshape makes a view, and views are cheap. Don't worry about this.
>>
>> BTW, I would love to add a collocation method to JuliaDiffEq. Would you
>> consider making this a package?
>>
>> On Sunday, October 30, 2016 at 3:52:37 AM UTC-7, Alexey Cherkaev wrote:
>>>
>>> I'm writing a RadauIIA method (for now, fixed order 5 with 3 points) for
>>> ODE BVP (basically, a collocation method). In the process, I construct an
>>> overall non-linear equation that needs to be solved. It takes a "mega-vector"
>>> x[j] as an argument. However, internally it is more convenient to reshape
>>> it to y[m,i,n] where m is the index of original ODE vector, i is the index
>>> of the collocation point on time element (or layer) and n is time element
>>> index. Also, some inputs to the method (ODE RHS function and BVP function)
>>> expect z[m]-kind vector. So far I chose to pass a @view of the
>>> "mega-vector" to them.
>>>
>>> The alternatives for reshaping and @view would be:
>>>
>>>- Use an inline function or a macro that maps the indices between
>>>mega-vector and arrays (I've tried it, didn't see any difference in
>>>performance or memory allocation, but @code_warntype has fewer "red" 
>>> spots)
>>>- Copy relevant pieces of mega-vector into preallocated arrays of
>>>desired shape. This can also be an alternative for @view.
>>>
>>> Is there some kind of rule of thumb where which one would be preferable?
>>> And are there any high costs associated with @view and reshape?
>>>
>>>
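
For reference, a minimal sketch of the no-copy pattern under discussion 
(the dimension names are illustrative):

m, i, n = 3, 3, 10           # ODE dimension, collocation points, time elements
x = rand(m * i * n)          # the "mega-vector" passed to the nonlinear solver
y = reshape(x, m, i, n)      # shares memory with x; no copy is made
z = @view y[:, 2, 5]         # z[m]-style slice for the ODE RHS, also no copy
z[1] = 0.0                   # writes through to x as well

Since both reshape and @view alias the original data, the cost is just a small 
wrapper object, which is why copying into preallocated arrays rarely pays off 
here.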


[julia-users] Re: Cost of @view and reshape

2016-11-01 Thread Alexey Cherkaev
The package is available at https://github.com/mobius-eng/RadauBVP.jl

I haven't put it into METADATA yet. I would like to improve documentation 
and add some tests before doing this.

However, it is already usable and mostly optimised (except for sparsity, 
which is coming), and I believe it is the only available free ODE BVP solver for 
Julia right now (the only other alternative I am aware of is `bvpsol` from 
ODEInterface.jl, but it is not free, and from the limited amount of tests I've 
done, RadauBVP is faster).


On Sunday, October 30, 2016 at 8:40:07 PM UTC+2, Chris Rackauckas wrote:
>
> reshape makes a view, and views are cheap. Don't worry about this.
>
> BTW, I would love to add a collocation method to JuliaDiffEq. Would you 
> consider making this a package?
>
> On Sunday, October 30, 2016 at 3:52:37 AM UTC-7, Alexey Cherkaev wrote:
>>
>> I'm writing a RadauIIA method (for now, fixed order 5 with 3 points) for 
>> ODE BVP (basically, a collocation method). In the process, I construct an 
>> overall non-linear equation that needs to be solved. It takes a "mega-vector" 
>> x[j] as an argument. However, internally it is more convenient to reshape 
>> it to y[m,i,n] where m is the index of original ODE vector, i is the index 
>> of the collocation point on time element (or layer) and n is time element 
>> index. Also, some inputs to the method (ODE RHS function and BVP function) 
>> expect z[m]-kind vector. So far I chose to pass a @view of the 
>> "mega-vector" to them.
>>
>> The alternatives for reshaping and @view would be:
>>
>>- Use an inline function or a macro that maps the indices between 
>>mega-vector and arrays (I've tried it, didn't see any difference in 
>>performance or memory allocation, but @code_warntype has fewer "red" spots)
>>- Copy relevant pieces of mega-vector into preallocated arrays of 
>>desired shape. This can also be an alternative for @view.
>>
>> Is there some kind of rule of thumb where which one would be preferable? 
>> And are there any high costs associated with @view and reshape?
>>
>>

[julia-users] Simulink alternative, based on Julia?

2016-11-01 Thread Uwe Fechner
Hello,

from my point of view, Julia is already a viable alternative to Matlab for 
flight
and energy system simulations.

What still is missing is a block editor for dynamic systems like Simulink.

The best open-source alternative to Simulink that I know of is XCos, 
which is part of Scilab.
XCos is a block diagram editor and GUI for the hybrid simulator.

I think that the XCos GUI is implemented in Java.
See: https://www.scilab.org/scilab/features/xcos

What would be a reasonable approach for implementing a block diagram
editor for Julia? Using QML.jl? Or could such an editor be implemented
with web technologies, based on Atom?

Any ideas welcome.

Uwe Fechner


[julia-users] Re: Webapp Deployment

2016-11-01 Thread Adrian Salceanu
My experience with Escher is limited to reading the docs and looking at the 
sources, but it seems to be related to loading the file (or its content): 

function loadfile(filename)
    if isfile(filename)
        try
            ui = include(filename)
            if isa(ui, Function)
                return ui
            else
                warn("$filename did not return a function")
                return (w) -> Elem(:p, string(
                    filename, " did not return a UI function"
                ))
            end
        catch err
            bt = backtrace()
            return (win) -> Elem(:pre, sprint() do io
                showerror(io, err)
                Base.show_backtrace(io, bt)
            end)
        end
    else
        return (w) -> Elem(:p, string(
            filename, " could not be found."
        ))
    end
end
(from https://github.com/shashi/Escher.jl/blob/master/src/cli/serve.jl)

So maybe make sure the example files are accessible (readable)? 
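
A quick check along those lines from the Julia session running the server 
(the path construction is illustrative):

julia> isfile(Pkg.dir("Escher", "examples", "hello.jl"))
true

If that returns false, the server simply isn't finding the file; if true, the 
problem is more likely in what hello.jl returns when included.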

You can use the usual communication paths: a new issue on GitHub or a 
question on StackOverflow. Also, check if there's a Gitter channel for Escher. 


On Tuesday, November 1, 2016 at 09:38:11 UTC+2, Reuben Brooks wrote:
>
> When I try to run the examples or the basic hello.jl file in Escher, I always 
> get this in the browser: ".../Escher/examples/hello.jl did not return a UI 
> function"
>
> I don't see any issues filed on GitHub with this, so I suspect it's something 
> on my end. What would be the appropriate channel for me to get some help on 
> this?
>
> On Tuesday, November 1, 2016 at 1:10:18 AM UTC-5, Adrian Salceanu wrote:
>>
>> Sounds like the answer is https://github.com/shashi/Escher.jl 
>>
>> It was built exactly for your use case and it's actually inspired by Elm 
>>
>>
>>
>> On Tuesday, November 1, 2016 at 06:08:01 UTC+2, Reuben Brooks wrote:
>>>
>>> Context: I love julia, and I've never built any kind of webapp. Most of 
>>> my programming experience is in Mathematica and Julia...hacking things 
>>> together (poorly) in Python when nothing else works.
>>>
>>> Problem: I have a script  / notebook in julia that pulls data from 
>>> sources, analyzes it, builds fancy plots, and has lots of nice information. 
>>> Now I want to build a basic webapp that will allow me to access this 
>>> information anywhere, anytime (will be updated regularly). 
>>>
>>> Question 1: is there a julia package that suits my needs well, or should 
>>> I look at using some other frontend technology to create the frontend? Elm intrigues 
>>> me, as much for the learning as for the actual solution. 
>>>
>>> Bottom line: I don't know enough about what I'm wading into to choose 
>>> wisely. What does the community suggest?
>>>
>>

[julia-users] Re: Webapp Deployment

2016-11-01 Thread Reuben Brooks
When I try to run the examples or the basic hello.jl file in Escher, I always get 
this in the browser: ".../Escher/examples/hello.jl did not return a UI function"

I don't see any issues filed on GitHub with this, so I suspect it's something on 
my end. What would be the appropriate channel for me to get some help on 
this?

On Tuesday, November 1, 2016 at 1:10:18 AM UTC-5, Adrian Salceanu wrote:
>
> Sounds like the answer is https://github.com/shashi/Escher.jl 
>
> It was built exactly for your use case and it's actually inspired by Elm 
>
>
>
> On Tuesday, November 1, 2016 at 06:08:01 UTC+2, Reuben Brooks wrote:
>>
>> Context: I love julia, and I've never built any kind of webapp. Most of 
>> my programming experience is in Mathematica and Julia...hacking things 
>> together (poorly) in Python when nothing else works.
>>
>> Problem: I have a script  / notebook in julia that pulls data from 
>> sources, analyzes it, builds fancy plots, and has lots of nice information. 
>> Now I want to build a basic webapp that will allow me to access this 
>> information anywhere, anytime (will be updated regularly). 
>>
>> Question 1: is there a julia package that suits my needs well, or should 
>> I look at using some other frontend technology to create the frontend? Elm intrigues 
>> me, as much for the learning as for the actual solution. 
>>
>> Bottom line: I don't know enough about what I'm wading into to choose 
>> wisely. What does the community suggest?
>>
>

[julia-users] Re: Webapp Deployment

2016-11-01 Thread Adrian Salceanu
Sounds like the answer is https://github.com/shashi/Escher.jl 

It was built exactly for your use case and it's actually inspired by Elm 



On Tuesday, November 1, 2016 at 06:08:01 UTC+2, Reuben Brooks wrote:
>
> Context: I love julia, and I've never built any kind of webapp. Most of my 
> programming experience is in Mathematica and Julia...hacking things 
> together (poorly) in Python when nothing else works.
>
> Problem: I have a script  / notebook in julia that pulls data from 
> sources, analyzes it, builds fancy plots, and has lots of nice information. 
> Now I want to build a basic webapp that will allow me to access this 
> information anywhere, anytime (will be updated regularly). 
>
> Question 1: is there a julia package that suits my needs well, or should I 
> look at using some other frontend technology to create the frontend? Elm intrigues me, 
> as much for the learning as for the actual solution. 
>
> Bottom line: I don't know enough about what I'm wading into to choose 
> wisely. What does the community suggest?
>