[julia-users] Re: @printf with comma for thousands separator (also ANN: NumFormat.jl)

2014-11-21 Thread Tony Fong
I just wrote a package that may do what you need at runtime.

https://github.com/tonyhffong/NumFormat.jl

It's not in METADATA yet, since I'm not sure if its implementation is 
kosher. (There's a trick of generating new generic functions at runtime 
within the module namespace.)

Speed is decent (within 30% of standard macro). You can run the test script 
to see the difference on your machine.

After you clone it, you can try

using NumFormat

format( 12345678, commas=true ) # method 1. slowest, but easiest to change 
format in a readable way

sprintf1( "%'d", 12345678 ) # method 2. closest to @sprintf in form, so one 
can switch over quickly

f = generate_formatter( "%'d" ) # method 3. fastest if f is used repeatedly
f( 12345678 )

Tony


On Saturday, November 8, 2014 6:05:08 PM UTC+7, Arch Call wrote:

 How would I use the @printf macro to insert commas as thousands separators 
 in integers?

 @printf "%d \n" 12345678

 This outputs:  12345678

 I would like the output to be:   12,345,678

 Thanks...Archie




[julia-users] constructing ArrayViews with indexers that are not representable as a range

2014-11-21 Thread Ján Dolinský
Hello,

I am trying to create an ArrayView with column indexer like [2,7,8,10] or 
even a repeating example like [2,7,7,10].  E.g.

X = rand(10,10)

view(X, :, [2,7,8,10])

Is this possible at the moment? The documentation of ArrayViews says that 
four types of indexers are supported: integer, range (*e.g.* a:b), stepped 
range (*e.g.* a:b:c), and colon (*i.e.*, :).

Thanks,
Jan
 

Re: [julia-users] Create formatted string

2014-11-21 Thread Tony Fong
For formatting just one number into a string (instead of multi-arg sprintf) 
you can use sprintf1 in https://github.com/tonyhffong/NumFormat.jl. It's 
somewhat close to standard macro speed.


On Friday, April 18, 2014 12:58:08 AM UTC+7, Dominique Orban wrote:

 Here are some timings comparing @printf with the proposed @eval option. I 
 also wanted to try a variant that calls libc's printf directly. I came up 
 with this implementation: https://gist.github.com/11000433. Its 
 advantage is that you can print an array's address using %p (for what 
 it's worth).

 I didn't find a way around another @eval due to what ccall expects its 
 arguments to look like. I thought it would be easy to call libc's printf, 
 but it wasn't! (Also I'm sure cprintf should be more sophisticated than it 
 currently is, but for now, it does what I need.)

 Running time_printf.jl on my Macbook pro gives the following timings:

macro  eval  libc
 8.06e-05  5.91e-02  6.63e-03

 These are averages over 1000 calls. The call to libc's printf isn't doing 
 too badly compared to the simpler @eval proposed by Stefan. But I'm 
 wondering if it's possible to avoid the @eval in cprintf and call the C 
 function directly?!

 Are there other options?

 I'm all for performance but when it comes to printing, convenience and 
 flexibility are also a must in my opinion. Because printing is inherently 
 inefficient, I'm willing to accept performance hits there.

 Many thanks for all the help.

 ps: Shouldn't @time return the execution time?


 On Thursday, April 17, 2014 9:17:59 AM UTC-7, John Myles White wrote:

 I think the question is how to time a proposed sprintf() function vs. the 
 existing @sprintf macro.

  -- John

 On Apr 17, 2014, at 7:27 AM, Stefan Karpinski ste...@karpinski.org 
 wrote:

 I'm not sure what you mean, but doing things in a loop and timing it is 
 the normal way. The lack of usefulness of my answer may be indicative that 
 I don't understand the question.


 On Wed, Apr 16, 2014 at 11:13 PM, Dominique Orban dominiq...@gmail.com 
 wrote:

 How would one go about benchmarking a set of implementations like those?


 On Sunday, April 13, 2014 3:22:58 PM UTC-7, Stefan Karpinski wrote:

 Please don't do this – or if you do and your program is amazingly slow, 
 then consider yourself warned. You can define a custom formatting function 
 pretty easily:

 julia> fmt = "%8.1e"
 "%8.1e"

 julia> @eval dofmt(x) = @sprintf($fmt, x)
 dofmt (generic function with 1 method)

 julia> dofmt(1)
 " 1.0e+00"

 julia> dofmt(123.456)
 " 1.2e+02"


 The difference is that you compile the function definition with eval 
 *once* and then call it many times, rather than calling eval every time 
 you 
 want to print something.
  

 On Sun, Apr 13, 2014 at 6:17 PM, Mike Innes mike.j...@gmail.com 
 wrote:

 It occurs to me that, if you really need this, you can define

 sprintf(args...) = eval(:@sprintf($(args...)))

 It's not pretty or ideal in terms of performance, but it will do the 
 job.

 fmt = "%8.1e"
 sprintf(fmt, 3.141) # => " 3.1e+00"

 On Sunday, 13 April 2014 22:47:12 UTC+1, Dominique Orban wrote:

 So what's the preferred Julia syntax to achieve what I meant here:

 julia> fmt = "%8.1e";
 julia> @sprintf(fmt, 3.1415)
 ERROR: first or second argument must be a format string



 On Sunday, April 13, 2014 1:31:57 PM UTC-7, John Myles White wrote:

 As far as the macro is concerned, the splat isn’t executed: it’s 
 just additional syntax that gets taken in as a whole expression. 

 The contrast between how a function with splatting works and how a 
 macro with splatting works might be helpful: 

 julia> function splat(a, b...)
            println(a)
            println(b)
            return
        end
 splat (generic function with 2 methods)

 julia> splat(1, 2, 3)
 1
 (2,3)

 julia> splat(1, [2, 3]...)
 1
 (2,3)

 julia> macro splat(a, b...)
            println(a)
            println(b)
            :()
        end

 julia> @splat(1, 2, 3)
 1
 (2,3)
 ()

 julia> @splat(1, [2, 3]...)
 1
 (:([2,3]...),)
 ()


  — John 

 On Apr 13, 2014, at 1:20 PM, Jeff Waller trut...@gmail.com wrote: 

  Likewise I am having problems with @sprintf. 
  
  Is this because @sprintf is a macro?  The shorthand of expanding a 
 printf format with the contents of an array is desirable.  I would have 
 expected the ... operator to take an array of length 2 and turn it into 2 
 arguments. 
  
  julia> X = [1 2] 
 1x2 Array{Int64,2}: 
  1  2 
  
  julia> @sprintf("%d%d", 1, 2) 
  "12" 
  
  julia> @sprintf("%d%d", X...) 
  ERROR: @sprintf: wrong number of arguments 
  
  julia> @sprintf("%d%d", (1,2)...) 
  ERROR: @sprintf: wrong number of arguments 
  
  julia> @sprintf("%d", X...) 
  ERROR: error compiling anonymous: unsupported or misplaced 
 expression ... in function anonymous 
  in sprint at io.jl:460 
  in sprint at io.jl:464 
  
  julia> macroexpand(quote @sprintf("%d%d", X...) end) 
  :($(Expr(:error, 

[julia-users] InPlace function writing into a subarray

2014-11-21 Thread Ján Dolinský
Hello,

I wanted an in-place function to write a result into a subarray as follows:

X = zeros(10,5)
fill!(X[:,3], 0.1)

X[:,3] is, however, not updated.

X[:,3] = fill(0.1, 10) does the update as expected.

Is this the desired behaviour?


---

Alternatively, I can do

r = view(X,:,3)
fill!(r, 0.1)

This results in an updated column of X

I wonder which is likely to be more efficient if used in a loop:

X[:,3] = fill(0.1, 10)

or 
r = view(X,:,3)
fill!(r, 0.1)


Thanks,
Jan


Re: [julia-users] InPlace function writing into a subarray

2014-11-21 Thread John Myles White
X[:, 3] doesn't produce a SubArray. It produces a brand new array.
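The copy-vs-view distinction can be checked directly (a minimal sketch; on current Julia `view` is in Base, so no ArrayViews package is needed):

```julia
X = zeros(10, 5)

fill!(X[:, 3], 0.1)          # indexing with X[:, 3] allocates a copy...
@assert all(X[:, 3] .== 0.0) # ...so fill! lands on the copy, not on X

fill!(view(X, :, 3), 0.1)    # a view aliases X's underlying memory...
@assert all(X[:, 3] .== 0.1) # ...so X itself is updated in place
```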

-- John

On Nov 21, 2014, at 10:30 AM, Ján Dolinský jan.dolin...@2bridgz.com wrote:

 Hello,
 
 I wanted an inplace function to write a result into a subarray as follows:
 
 X = zeros(10,5)
 fill!(X[:,3], 0.1)
 
 The X[:,3] is however not updated.
 
 X[:,3] = fill(0.1, 10) does the update as expected.
 
 Is this a desired behaviour ?
 
 
 ---
 
 Alternatively, I can do
 
 r = view(X,:,3)
 fill!(r, 0.1)
 
 This results in an updated column of X
 
 I wonder which is likely to be more efficient if used in a loop:
 
 X[:,3] = fill(0.1, 10)
 
 or 
 r = view(X,:,3)
 fill!(r, 0.1)
 
 
 Thanks,
 Jan



Re: [julia-users] [ANN, x-post julia-stats] Mocha.jl Deep Learning Library for Julia

2014-11-21 Thread René Donner
Hi,

as I am just in the process of wrapping caffe, this looks really exciting! I 
will definitely try this out in the coming days.

Are there any specific areas where you would like testing / feedback for now?

Do you have an approximate feeling how the performance compares to caffe?

Cheers,

Rene



Am 21.11.2014 um 03:00 schrieb Chiyuan Zhang plus...@gmail.com:

 https://github.com/pluskid/Mocha.jl
 Mocha is a Deep Learning framework for Julia, inspired by the C++ Deep 
 Learning framework Caffe. Since this is the first time I have posted an 
 announcement here, the change logs of the last two releases are listed:
 
 v0.0.2 2014.11.20
 
   • Infrastructure
   • Ability to import caffe trained model
   • Properly release all the allocated resources upon backend 
 shutdown
   • Network
   • Sigmoid activation function
   • Power, Split, Element-wise layers
   • Local Response Normalization layer
   • Channel Pooling layer
   • Dropout Layer
   • Documentation
   • Complete MNIST demo
   • Complete CIFAR-10 demo
   • Major part of User's Guide
 v0.0.1 2014.11.13
 
   • Backend
   • Pure Julia CPU
   • Julia + C++ Extension CPU
   • CUDA + cuDNN GPU
   • Infrastructure
   • Evaluate on validation set during training
   • Automatically saving and recovering from snapshots
   • Network
   • Convolution layer, mean and max pooling layer, fully 
 connected layer, softmax loss layer
   • ReLU activation function
   • L2 Regularization
   • Solver
   • SGD with momentum
   • Documentation
   • Demo code of LeNet on MNIST
   • Tutorial document on the MNIST demo (half finished)
 
 
 Below is a copy of the README file:
 
 
 Mocha is a Deep Learning framework for Julia, inspired by the C++ Deep 
 Learning framework Caffe. Mocha supports multiple backends:
 
   • Pure Julia CPU Backend: Implemented in pure Julia; runs out of the 
 box without any external dependency; reasonably fast on small models thanks 
 to Julia's LLVM-based just-in-time (JIT) compiler and performance annotations 
 that eliminate unnecessary bounds checking.
   • CPU Backend with Native Extension: Some bottleneck computations 
 (convolution and pooling) have C++ implementations; when compiled and 
 enabled, it can be faster than the pure Julia backend.
   • CUDA + cuDNN: An interface to the NVidia® cuDNN GPU-accelerated deep 
 learning library. When run on CUDA GPU devices, it can be much faster 
 depending on the size of the problem (e.g. on MNIST the CUDA backend is 
 roughly 20 times faster than the pure Julia backend).
 Installation
 
 To install the release version, simply run
 
 Pkg.add("Mocha")
 
 in the Julia console. To install the latest development version, run the 
 following command instead:
 
 Pkg.clone("https://github.com/pluskid/Mocha.jl.git")
 
 Then you can run the built-in unit tests with
 
 Pkg.test("Mocha")
 
 to verify that everything is functioning properly on your machine.
 
 Hello World
 
 Please refer to the MNIST tutorial on how to prepare the MNIST dataset for the 
 following example. The complete code for this example is located at 
 examples/mnist/mnist.jl. See below for detailed documentation of other 
 tutorials and the user's guide.
 
 using Mocha
 
 data  = HDF5DataLayer(name="train-data", source="train-data-list.txt", batch_size=64)
 conv  = ConvolutionLayer(name="conv1", n_filter=20, kernel=(5,5), bottoms=[:data], tops=[:conv])
 pool  = PoolingLayer(name="pool1", kernel=(2,2), stride=(2,2), bottoms=[:conv], tops=[:pool])
 conv2 = ConvolutionLayer(name="conv2", n_filter=50, kernel=(5,5), bottoms=[:pool], tops=[:conv2])
 pool2 = PoolingLayer(name="pool2", kernel=(2,2), stride=(2,2), bottoms=[:conv2], tops=[:pool2])
 fc1   = InnerProductLayer(name="ip1", output_dim=500, neuron=Neurons.ReLU(), bottoms=[:pool2],
                           tops=[:ip1])
 fc2   = InnerProductLayer(name="ip2", output_dim=10, bottoms=[:ip1], tops=[:ip2])
 loss  = SoftmaxLossLayer(name="loss", bottoms=[:ip2, :label])
 
 sys = System(CuDNNBackend())
 init(sys)
 
 common_layers = [conv, pool, conv2, pool2, fc1, fc2]
 net = Net("MNIST-train", sys, [data, common_layers..., loss])
 
 params = SolverParameters(max_iter=10000, regu_coef=0.0005, momentum=0.9,
                           lr_policy=LRPolicy.Inv(0.01, 0.0001, 0.75))
 solver = SGD(params)
 
 # report training progress every 100 iterations
 add_coffee_break(solver, TrainingSummary(), every_n_iter=100)
 
 # save snapshots every 5000 iterations
 add_coffee_break(solver, Snapshot("snapshots", auto_load=true), every_n_iter=5000)
 
 # show performance on test data every 1000 iterations
 data_test = 

Re: [julia-users] InPlace function writing into a subarray

2014-11-21 Thread Ján Dolinský
Thanks. Indeed, and that is why fill!(X[:,3], 0.1) does not update X. It 
updates the brand new array, which is a copy of the 3rd column of X. 
Intuitively, however, I would expect fill!(X[:,3], 0.1) to update the 3rd 
column of X.

Thanks for the clarification,
Jan



[julia-users] Re: @printf with comma for thousands separator

2014-11-21 Thread Arch Call
Thanks Tony for getting back.

I wrote a Julia function that inserts thousands separators, truncates at a 
specified number of digits to the right of the decimal, and pads spaces for 
vertical alignment.

I'll reply to your post this weekend with my source code.  Maybe you can lift 
various aspects of my function into your code.

Best regards...Archie

On Saturday, November 8, 2014 6:05:08 AM UTC-5, Arch Call wrote:

 How would I use the @printf macro to insert commas as thousands separators in 
 integers?

 @printf "%d \n" 12345678

 This outputs:  12345678

 I would like the output to be:   12,345,678

 Thanks...Archie



Re: [julia-users] InPlace function writing into a subarray

2014-11-21 Thread Milan Bouchet-Valat
On Friday, November 21, 2014 at 05:06 -0800, Ján Dolinský wrote:
 Thanks. Indeed, and that is why fill!(X[:,3], 0.1) does not update X.
 It updates the brand new array which is a copy of 3rd column of X.
 Intuitively, however, I would expect fill!(X[:,3], 0.1) to update the
 3rd column of X.
Right, this will probably change soon in 0.4 when array views are made
the default.


Regards



Re: [julia-users] Re: @printf with comma for thousands separator (also ANN: NumFormat.jl)

2014-11-21 Thread Tim Holy
Looks really nice. There are several packages that have been dipping into the 
guts of Base.Grisu; this looks like a promising alternative.

--Tim

On Friday, November 21, 2014 12:48:35 AM Tony Fong wrote:
 I just wrote a package that may do what you need in runtime.
 
 https://github.com/tonyhffong/NumFormat.jl
 
 It's not in METADATA yet, since I'm not sure if its implementation is
 kosher. (There's a trick of generating new generic functions at runtime
 within the module name space).
 
 Speed is decent (within 30% of standard macro). You can run the test script
 to see the difference on your machine.
 
 After you clone it, you can try
 
 using NumFormat
 
 format( 12345678, commas=true ) # method 1. slowest, but easiest to change
 format in a readable way
 
 sprintf1( "%'d", 12345678 ) # method 2. closest to @sprintf in form, so one
 can switch over quickly
 
 f = generate_formatter( "%'d" ) # method 3. fastest if f is used repeatedly
 f( 12345678 )
 
 Tony
 
 On Saturday, November 8, 2014 6:05:08 PM UTC+7, Arch Call wrote:
  How would I use the @printf macro to insert commas as thousands separators in
  integers?
  
  @printf "%d \n" 12345678
  
  This outputs:  12345678
  
  I would like the output to be:   12,345,678
  
  Thanks...Archie



Re: [julia-users] constructing ArrayViews with indexers that are not representable as a range

2014-11-21 Thread Tim Holy
As of yesterday you can do this on julia 0.4 (using `sub` or `slice`). I don't 
know of an alternative way to do it on 0.3.

--Tim

On Friday, November 21, 2014 01:40:50 AM Ján Dolinský wrote:
 Hello,
 
 I am trying to create an ArrayView with column indexer like [2,7,8,10] or
 even a repeating example like [2,7,7,10].  E.g.
 
 X = rand(10,10)
 
 view(X, :, [2,7,8,10])
 
 Is this possible at the moment ? The documentation of ArrayViews says that
 four types of indexers are supported: integer, range (*e.g.* a:b), stepped
 range (*e.g.* a:b:c), and colon (*i.e.*, :).
 
 Thanks,
 Jan



Re: [julia-users] Re: @printf with comma for thousands separator (also ANN: NumFormat.jl)

2014-11-21 Thread Tony Fong
I'm only aware of https://github.com/lindahua/Formatting.jl, so I'd welcome 
pointers to the other packages, so we can pool efforts and ideas. 

Please note that my package only tries to focus on formatting one number at a 
time. It's easy to concatenate strings in Julia, so I didn't worry about 
replicating the full varargs functionality of sprintf.

On Friday, November 21, 2014 9:18:39 PM UTC+7, Tim Holy wrote:

 Looks really nice. There are several packages that have been dipping into 
 the 
 guts of Base.Grisu, this looks like a promising alternative. 

 --Tim 

 On Friday, November 21, 2014 12:48:35 AM Tony Fong wrote: 
  I just wrote a package that may do what you need in runtime. 
  
  https://github.com/tonyhffong/NumFormat.jl 
  
  It's not in METADATA yet, since I'm not sure if its implementation is 
  kosher. (There's a trick of generating new generic functions at runtime 
  within the module name space). 
  
  Speed is decent (within 30% of standard macro). You can run the test 
 script 
  to see the difference on your machine. 
  
  After you clone it, you can try 
  
  using NumFormat 
  
  format( 12345678, commas=true ) # method 1. slowest, but easiest to change 
  format in a readable way 
  
  sprintf1( "%'d", 12345678 ) # method 2. closest to @sprintf in form, so one 
  can switch over quickly 
  
  f = generate_formatter( "%'d" ) # method 3. fastest if f is used repeatedly 
  f( 12345678 ) 
  
  Tony 
  
  On Saturday, November 8, 2014 6:05:08 PM UTC+7, Arch Call wrote: 
   How would I use the @printf macro to insert commas as thousands separators in 
   integers? 
   
   @printf "%d \n" 12345678 
   
   This outputs:  12345678 
   
   I would like the output to be:   12,345,678 
   
   Thanks...Archie 



[julia-users] Typeclass implementation

2014-11-21 Thread Sebastian Good
In implementing new kinds of numbers, I've found it difficult to know just 
how many functions I need to implement for the general library to just 
work on them. Take as an example a byte-swapped, e.g. big-endian, integer. 
This is handy when doing memory-mapped I/O on a file with data written in 
network order. It would be nice to just implement, say, Int32BigEndian and 
have it act like a real number. (Then I could just reinterpret an mmapped 
array and work directly off it.) In general, we'd convert to Int32 at the 
earliest opportunity we had. For instance the following macro introduces a 
new type which claims to be derived from $base_type, and implements 
conversions and promotion rules to get it into a native form ($n_type) 
whenever it's used.

macro encoded_bitstype(name, base_type, bits_type, n_type, to_n, from_n)
quote
immutable $name <: $base_type
bits::$bits_type
end

Base.bits(x::$name) = bits(x.bits)
Base.bswap(x::$name) = $name(bswap(x.bits))

Base.convert(::Type{$n_type}, x::$name) = $to_n(x.bits)
Base.convert(::Type{$name}, x::$n_type) = $name($from_n(x))
Base.promote_rule(::Type{$name}, ::Type{$n_type}) = $n_type
Base.promote_rule(::Type{$name}, ::Type{$base_type}) = $n_type
end
end

I can use it like this

@encoded_bitstype(Int32BigEndian, Signed, Int32, Int32, bswap, bswap)

But unfortunately, it doesn't work out of the box because the conversions 
need to be explicit. I noticed that many of the math functions promote 
their arguments to a common type, but the following trick doesn't work, 
presumably because the promotion algorithm doesn't ask to promote types 
that are already identical.

Base.promote_rule(::Type{$name}, ::Type{$name}) = $n_type

It seems like there are a couple of issues this raises, and I know I've 
seen similar questions on this list as people implement new kinds of 
things, e.g. exotic matrices.

1. One possibility would be to allow an implicit promotion, perhaps 
expressed as the self-promotion above. I say I'm a Int32BigEndian, or 
CompressedVector, or what have you, and provide a way to turn me into an 
Int32 or Vector implicitly to take advantage of all the functions already 
written on those types. I'm not sure this is a great option for the 
language since it's been explicitly avoided elsewhere. but I'm curious if 
there have been any discussions in this direction

2. If instead I want to say this new type acts like an Integer, there's 
no canonical place for me to find out what all the functions are I need to 
implement. Ultimately, these are like Haskell's typeclasses, Ord, Eq, etc. 
By trial and error, we can determine many of them and implement them this 
way

macro as_number(name, n_type)
 quote
global +(x::$name, y::$name) = +(convert($n_type, x), 
convert($n_type, y))
global *(x::$name, y::$name) = *(convert($n_type, x), 
convert($n_type, y))
global -(x::$name, y::$name) = -(convert($n_type, x), 
convert($n_type, y))
global -(x::$name) = -convert($n_type, x)
global /(x::$name, y::$name) = /(convert($n_type, x), 
convert($n_type, y))
global ^(x::$name, y::$name) = ^(convert($n_type, x), 
convert($n_type, y))
global ==(x::$name, y::$name) = (==)(convert($n_type, x), 
convert($n_type, y))
global <(x::$name, y::$name) = (<)(convert($n_type, x), 
convert($n_type, y)) 
Base.flipsign(x::$name, y::$name) = Base.flipsign(convert($n_type, 
x), convert($n_type, y))
end
end

But I don't know if I've found them all, and my guesses may well change as 
implementation details inside the base library change. Gradual typing is 
great, but with such a powerful base library already in place, it would be 
good to have a facility to know which functions are associated with which 
named behaviors.

Since we already have abstract classes in place, e.g. Signed, Number, etc., 
it would be natural to extract a list of functions which operate on them, 
or, even better, allow the type declarer to specify which functions 
*should* operate on that abstract class, typeclass or interface style?

Are there any recommendations in place, or updates to the language planned, 
to address these sorts of topics?







Re: [julia-users] Typeclass implementation

2014-11-21 Thread John Myles White
This sounds a bit like a mix of two problems:

(1) A lack of interfaces:

 - a) A lack of formal interfaces, which will hopefully be addressed by 
something like Traits.jl at some point. 
(https://github.com/JuliaLang/julia/issues/6975)

 - b) A lack of documentation for informal interfaces, such as the methods that 
AbstractArray objects must implement.

(2) A lack of delegation when you make wrapper types: 
https://github.com/JuliaLang/julia/pull/3292

The first has moved forward a bunch thanks to Mauro's work. The second has not 
gotten much further, although Kevin Squire wrote a different delegate macro 
that's noticeably better than the draft I wrote.

 -- John
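
The delegation idea can be illustrated with a minimal sketch (hypothetical code, not the delegate macros mentioned above, which are more general; written in current Julia syntax, so `struct` rather than the 2014-era `immutable`):

```julia
# Hypothetical minimal delegation macro: forward unary Base functions on a
# wrapper type to one of its fields, so the wrapper "acts like" the field.
macro delegate(T, field, funcs)
    defs = [:(Base.$f(x::$(esc(T))) = Base.$f(x.$field)) for f in funcs.args]
    Expr(:block, defs...)
end

struct Wrapped
    val::Int
end

@delegate Wrapped val [abs, sign]

@assert abs(Wrapped(-3)) == 3
@assert sign(Wrapped(-3)) == -1
```

A real solution would also need to forward binary operators and decide what to return (the wrapper or the underlying type), which is exactly where the interface/traits question above comes in.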

On Nov 21, 2014, at 2:31 PM, Sebastian Good sebast...@palladiumconsulting.com 
wrote:

 In implementing new kinds of numbers, I've found it difficult to know just 
 how many functions I need to implement for the general library to just work 
 on them. Take as an example a byte-swapped, e.g. big-endian, integer. This is 
 handy when doing memory-mapped I/O on a file with data written in network 
 order. It would be nice to just implement, say, Int32BigEndian and have it 
 act like a real number. (Then I could just reinterpret a mmaped array and 
 work directly off it) In general, we'd convert to Int32 at the earliest 
 opportunity we had. For instance the following macro introduces a new type 
 which claims to be derived from $base_type, and implements conversions and 
 promotion rules to get it into a native form ($n_type) whenever it's used.
 
 macro encoded_bitstype(name, base_type, bits_type, n_type, to_n, from_n)
 quote
 immutable $name <: $base_type
 bits::$bits_type
 end
 
 Base.bits(x::$name) = bits(x.bits)
 Base.bswap(x::$name) = $name(bswap(x.bits))
 
 Base.convert(::Type{$n_type}, x::$name) = $to_n(x.bits)
 Base.convert(::Type{$name}, x::$n_type) = $name($from_n(x))
 Base.promote_rule(::Type{$name}, ::Type{$n_type}) = $n_type
 Base.promote_rule(::Type{$name}, ::Type{$base_type}) = $n_type
 end
 end
 
 I can use it like this
 
 @encoded_bitstype(Int32BigEndian, Signed, Int32, Int32, bswap, bswap)
 
 But unfortunately, it doesn't work out of the box because the conversions 
 need to be explicit. I noticed that many of the math functions promote their 
 arguments to a common type, but the following trick doesn't work, presumably 
 because the promotion algorithm doesn't ask to promote types that are already 
 identical.
 
 Base.promote_rule(::Type{$name}, ::Type{$name}) = $n_type
 
 It seems like there are a couple of issues this raises, and I know I've seen 
 similar questions on this list as people implement new kinds of things, e.g. 
 exotic matrices.
 
 1. One possibility would be to allow an implicit promotion, perhaps expressed 
 as the self-promotion above. I say I'm a Int32BigEndian, or CompressedVector, 
 or what have you, and provide a way to turn me into an Int32 or Vector 
 implicitly to take advantage of all the functions already written on those 
 types. I'm not sure this is a great option for the language since it's been 
 explicitly avoided elsewhere. but I'm curious if there have been any 
 discussions in this direction
 
 2. If instead I want to say this new type acts like an Integer, there's no 
 canonical place for me to find out what all the functions are I need to 
 implement. Ultimately, these are like Haskell's typeclasses, Ord, Eq, etc. By 
 trial and error, we can determine many of them and implement them this way
 
 macro as_number(name, n_type)
  quote
 global +(x::$name, y::$name) = +(convert($n_type, x), 
 convert($n_type, y))
 global *(x::$name, y::$name) = *(convert($n_type, x), 
 convert($n_type, y))
 global -(x::$name, y::$name) = -(convert($n_type, x), 
 convert($n_type, y))
 global -(x::$name) = -convert($n_type, x)
 global /(x::$name, y::$name) = /(convert($n_type, x), 
 convert($n_type, y))
 global ^(x::$name, y::$name) = ^(convert($n_type, x), 
 convert($n_type, y))
 global ==(x::$name, y::$name) = (==)(convert($n_type, x), 
 convert($n_type, y))
 global <(x::$name, y::$name) = (<)(convert($n_type, x), 
 convert($n_type, y)) 
 Base.flipsign(x::$name, y::$name) = Base.flipsign(convert($n_type, 
 x), convert($n_type, y))
 end
 end
 
 But I don't know if I've found them all, and my guesses may well change as 
 implementation details inside the base library change. Gradual typing is 
 great, but with such a powerful base library already in place, it would be 
 good to have a facility to know which functions are associated with which 
 named behaviors.
 
 Since we already have abstract classes in place, e.g. Signed, Number, etc., 
 it would be natural to extract a list of functions which operate on them, or, 
 even better, allow the type declarer to specify which functions *should* 
 operate on that abstract class, typeclass or 

Re: [julia-users] Typeclass implementation

2014-11-21 Thread Sebastian Good
I will look into Traits.jl -- interesting package.

To get traction and some of the great power of composability, the base
library will need to be carefully decomposed into traits, which (as noted
in some of the issue conversations on github) takes you straight to the
great research Haskell is doing in this area.

*Sebastian Good*


On Fri, Nov 21, 2014 at 9:38 AM, John Myles White johnmyleswh...@gmail.com
wrote:

 This sounds a bit like a mix of two problems:

 (1) A lack of interfaces:

  - a) A lack of formal interfaces, which will hopefully be addressed by
 something like Traits.jl at some point. (
 https://github.com/JuliaLang/julia/issues/6975)

  - b) A lack of documentation for informal interfaces, such as the methods
 that AbstractArray objects must implement.

 (2) A lack of delegation when you make wrapper types:
 https://github.com/JuliaLang/julia/pull/3292

 The first has moved forward a bunch thanks to Mauro's work. The second has
 not gotten much further, although Kevin Squire wrote a different delegate
 macro that's noticeably better than the draft I wrote.

  -- John

 On Nov 21, 2014, at 2:31 PM, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:

 In implementing new kinds of numbers, I've found it difficult to know just
 how many functions I need to implement for the general library to just
 work on them. Take as an example a byte-swapped, e.g. big-endian, integer.
 This is handy when doing memory-mapped I/O on a file with data written in
 network order. It would be nice to just implement, say, Int32BigEndian and
 have it act like a real number. (Then I could just reinterpret a mmaped
 array and work directly off it) In general, we'd convert to Int32 at the
 earliest opportunity we had. For instance the following macro introduces a
 new type which claims to be derived from $base_type, and implements
 conversions and promotion rules to get it into a native form ($n_type)
 whenever it's used.

 macro encoded_bitstype(name, base_type, bits_type, n_type, to_n, from_n)
 quote
 immutable $name <: $base_type
 bits::$bits_type
 end

 Base.bits(x::$name) = bits(x.bits)
 Base.bswap(x::$name) = $name(bswap(x.bits))

 Base.convert(::Type{$n_type}, x::$name) = $to_n(x.bits)
 Base.convert(::Type{$name}, x::$n_type) = $name($from_n(x))
 Base.promote_rule(::Type{$name}, ::Type{$n_type}) = $n_type
 Base.promote_rule(::Type{$name}, ::Type{$base_type}) = $n_type
 end
 end

 I can use it like this

 @encoded_bitstype(Int32BigEndian, Signed, Int32, Int32, bswap, bswap)

 But unfortunately, it doesn't work out of the box because the conversions
 need to be explicit. I noticed that many of the math functions promote
 their arguments to a common type, but the following trick doesn't work,
 presumably because the promotion algorithm doesn't ask to promote types
 that are already identical.

 Base.promote_rule(::Type{$name}, ::Type{$name}) = $n_type

 It seems like there are a couple of issues this raises, and I know I've
 seen similar questions on this list as people implement new kinds of
 things, e.g. exotic matrices.

 1. One possibility would be to allow an implicit promotion, perhaps
 expressed as the self-promotion above. I say I'm a Int32BigEndian, or
 CompressedVector, or what have you, and provide a way to turn me into an
 Int32 or Vector implicitly to take advantage of all the functions already
 written on those types. I'm not sure this is a great option for the
 language since it's been explicitly avoided elsewhere. but I'm curious if
 there have been any discussions in this direction

 2. If instead I want to say this new type acts like an Integer, there's
 no canonical place for me to find out what all the functions are I need to
 implement. Ultimately, these are like Haskell's typeclasses, Ord, Eq, etc.
 By trial and error, we can determine many of them and implement them this
 way

 macro as_number(name, n_type)
  quote
 global +(x::$name, y::$name) = +(convert($n_type, x),
 convert($n_type, y))
 global *(x::$name, y::$name) = *(convert($n_type, x),
 convert($n_type, y))
 global -(x::$name, y::$name) = -(convert($n_type, x),
 convert($n_type, y))
 global -(x::$name) = -convert($n_type, x)
 global /(x::$name, y::$name) = /(convert($n_type, x),
 convert($n_type, y))
 global ^(x::$name, y::$name) = ^(convert($n_type, x),
 convert($n_type, y))
 global ==(x::$name, y::$name) = (==)(convert($n_type, x),
 convert($n_type, y))
 global <(x::$name, y::$name) = (<)(convert($n_type, x),
 convert($n_type, y))
 Base.flipsign(x::$name, y::$name) = Base.flipsign(convert($n_type,
 x), convert($n_type, y))
 end
 end

 But I don't know if I've found them all, and my guesses may well change as
 implementation details inside the base library change. Gradual typing is
 great, but with such a powerful base library already in place, it would be

Re: [julia-users] constructing ArrayViews with indexers that are not representable as a range

2014-11-21 Thread Ján Dolinský
Thanks for this tip.

I am OK at the moment on 0.3 with e.g. X[:, [2,7,8,10]] or X[:, [2,7,7,10]]. It 
creates a copy, but it is good enough at the moment.

Thanks,
Jan  
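For reference, a minimal sketch of the copying fallback described above: plain indexing with a vector of column indices (repeats allowed) returns a new array rather than a view.

```julia
X = rand(10, 10)
cols = [2, 7, 7, 10]      # arbitrary, possibly repeating column indexer
Y = X[:, cols]            # plain indexing: makes a copy, not a view

# The copy has one column per entry in `cols`, so the repeated
# column 7 shows up twice:
size(Y)                   # (10, 4)
Y[:, 2] == Y[:, 3]        # true

# Mutating the copy leaves X untouched (rand() never produces -1.0):
Y[1, 1] = -1.0
X[1, 2] != -1.0           # true
```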



Re: [julia-users] InPlace function writting into a subarray

2014-11-21 Thread Ján Dolinský
Lovely :)


Re: [julia-users] calling libraries of c++ or fortran90 code in Julia

2014-11-21 Thread Erik Schnetter
Ben

It could be that you need to leave out the leading underscore in the
ccall expression.

-erik

On Fri, Nov 21, 2014 at 10:13 AM, Ben Arthur bjarthu...@gmail.com wrote:
 i'm trying to use readelf and ccall as suggested for a C++ shared library,
 and get the following error. any ideas? thanks.


 $ readelf -sW /path/to/lib/libengine.so | grep resampler | grep init

 101: 3f10 832 FUNC LOCAL DEFAULT 11
 _ZL7initGPUP9resamplerPKjS2_j

 102: 4dc4 8 OBJECT LOCAL DEFAULT 13
 _ZZL7initGPUP9resamplerPKjS2_jE12__FUNCTION__



 julia> dlopen("/path/to/lib/libengine.so", RTLD_LAZY|RTLD_DEEPBIND|RTLD_GLOBAL)

 Ptr{Void} @0x06fc9c40


 julia> ccall((:_ZL7initGPUP9resamplerPKjS2_j, "/path/to/lib/libengine.so"),
 Int, (Ptr{Void},Ptr{Cuint},Ptr{Cuint},Cuint), arg1,arg2,arg3,arg4)

 ERROR: ccall: could not find function _ZL7initGPUP9resamplerPKjS2_j in
 library /path/to/lib/libengine.so


 in anonymous at no file


 julia> ccall((:_ZZL7initGPUP9resamplerPKjS2_jE12__FUNCTION__,
 "/path/to/lib/libengine.so"), Int, (Ptr{Void},Ptr{Cuint},Ptr{Cuint},Cuint),
 arg1,arg2,arg3,arg4)

 ERROR: ccall: could not find function
 _ZZL7initGPUP9resamplerPKjS2_jE12__FUNCTION__ in library
 /path/to/lib/libengine.so

 in anonymous at no file



-- 
Erik Schnetter schnet...@cct.lsu.edu
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Re: Simple Finite Difference Methods

2014-11-21 Thread Steven G. Johnson
I prefer to construct multidimensional Laplacian matrices from the 1d ones 
via Kronecker products, and the 1d ones from -D'*D where D is a 1d 
first-derivative matrix, to make the structure (symmetric-definite!) and 
origin of the matrices clearer.   I've posted a notebook showing some 
examples from my 18.303 class at MIT:

http://nbviewer.ipython.org/url/math.mit.edu/~stevenj/18.303/lecture-10.ipynb

(It would be nicer to construct the sdiff1 function so that the matrix is 
sparse from the beginning, rather than converting from a dense matrix.  But 
I was lazy and dense matrices on 1d grids are cheap.)
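As a sketch of the construction described above (the name `sdiff1` follows the notebook's convention; written densely here for brevity, as in the notebook, even though a sparse version would scale better):

```julia
# sdiff1(M) is the (M+1)×M first-difference matrix, with Dirichlet
# boundaries u[0] = u[M+1] = 0 built in: (D*u)[i] ≈ u[i] - u[i-1].
sdiff1(M) = [(i == j ? 1.0 : 0.0) - (i == j+1 ? 1.0 : 0.0) for i in 1:M+1, j in 1:M]

using LinearAlgebra   # for kron on current Julia (it was in Base on 0.3)

M  = 5
D  = sdiff1(M)
L1 = -D' * D          # 1d Laplacian: tridiagonal (1, -2, 1), symmetric by construction
Id = [(i == j ? 1.0 : 0.0) for i in 1:M, j in 1:M]

# 2d Laplacian on an M×M grid via Kronecker products, as described above:
L2 = kron(Id, L1) + kron(L1, Id)
```

Because L1 = -D'D, both L1 and L2 are manifestly symmetric (and negative semidefinite), which is the structural point being made.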


Re: [julia-users] calling libraries of c++ or fortran90 code in Julia

2014-11-21 Thread Ben Arthur
i should've mentioned i had tried removing the underscore already.  didn't 
work.


[julia-users] Re: PyPlot.plot_date: How do I alter the time format?

2014-11-21 Thread Steven G. Johnson
If you post what you did in Python, we should be able to translate it to 
Julia.


[julia-users] Re: Mutate C struct represented as Julia immutable

2014-11-21 Thread Steven G. Johnson

On Thursday, November 20, 2014 5:37:22 PM UTC-5, Eric Davies wrote:

 (For context, I'm working on this issue: 
 https://github.com/JuliaOpt/ECOS.jl/issues/12 and dealing with these 
 structs: 
 https://github.com/JuliaOpt/ECOS.jl/blob/master/src/types.jl#L124-L216 )

 I have a C struct used in a 3rd-party C library mirrored as an immutable 
 in Julia. A pointer to the C struct is returned in another C struct, and I 
 get the immutable using pointer_to_array(...)[1]. I want to be able to 
 modify fields of the struct in-place, but immutables disallow this. How do 
 I go about this? 


Why not just use pointer_to_array(...)[1] = ...new immutable..., or 
unsafe_store!(pointer-to-immutable, new immutable) to store a new struct 
(built from the old struct + modifications) in the old location?
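A minimal sketch of the unsafe_store! route. Everything here is a hypothetical stand-in: `Pair2` for the mirrored ECOS struct, and the backing array for the memory the C library owns and hands back as a pointer.

```julia
# Hypothetical stand-in for a mirrored C struct (`immutable` on Julia 0.3):
struct Pair2
    a::Cint
    b::Cint
end

buf = [Pair2(1, 2)]     # stand-in for memory owned by the C library
p   = pointer(buf)      # Ptr{Pair2}, like the pointer the C API returns

old = unsafe_load(p)                       # read the current struct
unsafe_store!(p, Pair2(old.a, Cint(42)))   # old struct + modification, stored in place

buf[1]    # now Pair2(1, 42)
```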


Re: [julia-users] Typeclass implementation

2014-11-21 Thread Jiahao Chen
 If instead I want to say this new type acts like an Integer, there's no
canonical place for me to find out what all the functions are I need to
implement.

The closest thing we have now is methodswith(Integer)
and methodswith(Integer, true) (the latter gives also all the methods that
Integer inherits from its supertypes).
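For example (on current Julia, `methodswith` lives in the InteractiveUtils stdlib; the two-argument form mentioned above is spelled with a `supertypes` keyword there):

```julia
using InteractiveUtils   # not needed on Julia 0.3

# Every method in scope with Integer in its signature — a rough inventory
# of what "acts like an Integer" means in the base library:
ms = methodswith(Integer)
length(ms) > 0     # true

# Also include methods inherited from supertypes (Real, Number, ...):
ms_all = methodswith(Integer, supertypes=true)
length(ms_all) >= length(ms)   # true
```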

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory

On Fri, Nov 21, 2014 at 9:54 AM, Sebastian Good 
sebast...@palladiumconsulting.com wrote:

 I will look into Traits.jl -- interesting package.

 To get traction and some of the great power of comparability, the base
 library will need to be carefully decomposed into traits, which (as noted
 in some of the issue conversations on github) takes you straight to the
 great research Haskell is doing in this area.

 *Sebastian Good*


 On Fri, Nov 21, 2014 at 9:38 AM, John Myles White 
 johnmyleswh...@gmail.com wrote:

 This sounds a bit like a mix of two problems:

 (1) A lack of interfaces:

  - a) A lack of formal interfaces, which will hopefully be addressed by
 something like Traits.jl at some point. (
 https://github.com/JuliaLang/julia/issues/6975)

  - b) A lack of documentation for informal interfaces, such as the
 methods that AbstractArray objects must implement.

 (2) A lack of delegation when you make wrapper types:
 https://github.com/JuliaLang/julia/pull/3292

 The first has moved forward a bunch thanks to Mauro's work. The second
 has not gotten much further, although Kevin Squire wrote a different
 delegate macro that's noticeably better than the draft I wrote.

  -- John

 On Nov 21, 2014, at 2:31 PM, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:

 In implementing new kinds of numbers, I've found it difficult to know
 just how many functions I need to implement for the general library to
 just work on them. Take as an example a byte-swapped, e.g. big-endian,
 integer. This is handy when doing memory-mapped I/O on a file with data
 written in network order. It would be nice to just implement, say,
 Int32BigEndian and have it act like a real number. (Then I could just
 reinterpret a mmaped array and work directly off it) In general, we'd
 convert to Int32 at the earliest opportunity we had. For instance the
 following macro introduces a new type which claims to be derived from
 $base_type, and implements conversions and promotion rules to get it into a
 native form ($n_type) whenever it's used.

 macro encoded_bitstype(name, base_type, bits_type, n_type, to_n, from_n)
 quote
 immutable $name <: $base_type
 bits::$bits_type
 end

 Base.bits(x::$name) = bits(x.bits)
 Base.bswap(x::$name) = $name(bswap(x.bits))

 Base.convert(::Type{$n_type}, x::$name) = $to_n(x.bits)
 Base.convert(::Type{$name}, x::$n_type) = $name($from_n(x))
 Base.promote_rule(::Type{$name}, ::Type{$n_type}) = $n_type
 Base.promote_rule(::Type{$name}, ::Type{$base_type}) = $n_type
 end
 end

 I can use it like this

 @encoded_bitstype(Int32BigEndian, Signed, Int32, Int32, bswap, bswap)

 But unfortunately, it doesn't work out of the box because the conversions
 need to be explicit. I noticed that many of the math functions promote
 their arguments to a common type, but the following trick doesn't work,
 presumably because the promotion algorithm doesn't ask to promote types
 that are already identical.

 Base.promote_rule(::Type{$name}, ::Type{$name}) = $n_type

 It seems like there are a couple of issues this raises, and I know I've
 seen similar questions on this list as people implement new kinds of
 things, e.g. exotic matrices.

 1. One possibility would be to allow an implicit promotion, perhaps
 expressed as the self-promotion above. I say I'm an Int32BigEndian, or
 CompressedVector, or what have you, and provide a way to turn me into an
 Int32 or Vector implicitly to take advantage of all the functions already
 written on those types. I'm not sure this is a great option for the
 language since it's been explicitly avoided elsewhere, but I'm curious if
 there have been any discussions in this direction.

 2. If instead I want to say this new type acts like an Integer, there's
 no canonical place for me to find out what all the functions are I need to
 implement. Ultimately, these are like Haskell's typeclasses, Ord, Eq, etc.
 By trial and error, we can determine many of them and implement them this
 way

 macro as_number(name, n_type)
  quote
 global +(x::$name, y::$name) = +(convert($n_type, x),
 convert($n_type, y))
 global *(x::$name, y::$name) = *(convert($n_type, x),
 convert($n_type, y))
 global -(x::$name, y::$name) = -(convert($n_type, x),
 convert($n_type, y))
 global -(x::$name) = -convert($n_type, x)
 global /(x::$name, y::$name) = /(convert($n_type, x),
 convert($n_type, y))
 global ^(x::$name, y::$name) = ^(convert($n_type, x),
 

Re: [julia-users] calling libraries of c++ or fortran90 code in Julia

2014-11-21 Thread Jeff Waller



bizarro% echo __ZZL7initGPUP9resamplerPKjS2_jE12__FUNCTION__01 | c++filt
initGPU(resampler*, unsigned int const*, unsigned int const*, unsigned int)::__FUNCTION__

I'm not entirely sure what that even means (virtual table?), but you know 
what? I bet you compiled this with clang, because when I do that same thing 
on Linux, it's like "wha?". Because of that, and exceptions, and virtual 
table layout, and the standard calling for no standard ABI, I agree: can't 
you wrap all the things you want to export with extern "C"?


Re: [julia-users] [ANN, x-post julia-stats] Mocha.jl Deep Learning Library for Julia

2014-11-21 Thread Chiyuan Zhang
Hi,

About testing: it might be interesting to port Caffe's demo into Mocha to 
see how it works. Currently we already have two: MNIST and CIFAR10.

In terms of performance: Mocha's CPU backend with the native extension 
performs similarly to Caffe in my very rough test. On MNIST, both GPU 
backends perform similarly. On CIFAR-10, which is a larger problem, Mocha 
GPU is a little bit slower than Caffe. In theory they should perform 
similarly, as both use the cuDNN backend for the bottleneck computations 
(convolution and pooling). The reason Caffe is currently a bit faster might 
be that Caffe uses CUDA's multi-stream to better utilize parallelization, 
or a different way of implementing the LRN layer, or any other 
testing-environment factors that I didn't try to control. We might consider 
adding that, too. For even larger examples like ImageNet, I haven't done 
any tests yet.

Best,
Chiyuan

On Friday, November 21, 2014 6:22:16 AM UTC-5, René Donner wrote:

 Hi, 

 as I am just in the process of wrapping caffe, this looks really 
 exciting! I will definitely try this out in the coming days. 

 Are there any specific areas where you would like testing / feedback for 
 now? 

 Do you have an approximate feeling how the performance compares to caffe? 

 Cheers, 

 Rene 



 On 21.11.2014 at 03:00, Chiyuan Zhang plu...@gmail.com wrote: 

  https://github.com/pluskid/Mocha.jl 
  Mocha is a Deep Learning framework for Julia, inspired by the C++ Deep 
 Learning framework Caffe. Since this is the first time I post announcement 
 here, change logs of the last two releases are listed: 
  
  v0.0.2 2014.11.20 

  • Infrastructure 
• Ability to import caffe trained model 
• Properly release all the allocated resources upon backend shutdown 
  • Network 
• Sigmoid activation function 
• Power, Split, Element-wise layers 
• Local Response Normalization layer 
• Channel Pooling layer 
• Dropout Layer 
  • Documentation 
• Complete MNIST demo 
• Complete CIFAR-10 demo 
• Major part of User's Guide 

  v0.0.1 2014.11.13 

  • Backend 
• Pure Julia CPU 
• Julia + C++ Extension CPU 
• CUDA + cuDNN GPU 
  • Infrastructure 
• Evaluate on validation set during training 
• Automatically saving and recovering from snapshots 
  • Network 
• Convolution layer, mean and max pooling layer, fully 
  connected layer, softmax loss layer 
• ReLU activation function 
• L2 Regularization 
  • Solver 
• SGD with momentum 
  • Documentation 
• Demo code of LeNet on MNIST 
• Tutorial document on the MNIST demo (half finished) 
  
  
  Below is a copy of the README file: 
   
  
  Mocha is a Deep Learning framework for Julia, inspired by the C++ Deep 
 Learning framework Caffe. Mocha supports multiple backends: 
  
  • Pure Julia CPU Backend: Implemented in pure Julia; Runs out of 
 the box without any external dependency; Reasonably fast on small models 
 thanks to Julia's LLVM-based just-in-time (JIT) compiler and Performance 
 Annotations that eliminate unnecessary bounds checks. 
  • CPU Backend with Native Extension: Some bottleneck 
 computations (Convolution and Pooling) have C++ implementations. When 
 compiled and enabled, could be faster than the pure Julia backend. 
  • CUDA + cuDNN: An interface to NVidia® cuDNN GPU accelerated 
 deep learning library. When run with CUDA GPU devices, could be much faster 
 depending on the size of the problem (e.g. on MNIST CUDA backend is roughly 
 20 times faster than the pure Julia backend). 
  Installation 
  
  To install the release version, simply run 
  
  Pkg.add("Mocha") 
  
  in Julia console. To install the latest development version, run the 
 following command instead: 
  
  Pkg.clone("https://github.com/pluskid/Mocha.jl.git") 
  
  Then you can run the built-in unit tests with 
  
  Pkg.test("Mocha") 
  
  to verify that everything is functioning properly on your machine. 
  
  Hello World 
  
  Please refer to the MNIST tutorial on how to prepare the MNIST dataset for 
 the following example. The complete code for this example is located at 
 examples/mnist/mnist.jl. See below for detailed documentation of other 
 tutorials and user's guide. 
  
  using Mocha 

  data = HDF5DataLayer(name="train-data", source="train-data-list.txt", batch_size=64) 
  conv = ConvolutionLayer(name="conv1", n_filter=20, kernel=(5,5), bottoms=[:data], tops=[:conv]) 
  pool = PoolingLayer(name="pool1", kernel=(2,2), stride=(2,2), bottoms=[:conv], tops=[:pool]) 

Re: [julia-users] calling libraries of c++ or fortran90 code in Julia

2014-11-21 Thread Erik Schnetter
This symbol is an object (maybe a lambda?) that is defined inside
another function (init_GPU). However, this isn't what Ben is trying to
call.

-erik

On Fri, Nov 21, 2014 at 10:59 AM, Jeff Waller truth...@gmail.com wrote:
 bizarro% echo __ZZL7initGPUP9resamplerPKjS2_jE12__FUNCTION__01 |c++filt
 initGPU(resampler*, unsigned int const*, unsigned int const*, unsigned
 int)::__FUNCTION__


 I'm not entirely sure what that even means (virtual table?), but you know
 what? I bet you compiled this with clang because when I do that same thing
 on Linux, it's like wa?. Because of that, and exceptions, and virtual
 table layout, and the standard calling for no standard ABI, I agree, can't
 you wrap all the things you want to export with extern C?



-- 
Erik Schnetter schnet...@cct.lsu.edu
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] calling libraries of c++ or fortran90 code in Julia

2014-11-21 Thread Jeff Waller
Oh yea.  I cut-n-pasted the wrong line, well I'll just fix that and 

echo __ZL7initGPUP9resamplerPKjS2_j | c++filt
initGPU(resampler*, unsigned int const*, unsigned int const*, unsigned int)

different symbol, same comment.  

but if it has to be this way, hmm... FUNC LOCAL DEFAULT: that function 
isn't declared static, is it?  


[julia-users] Re: iJulia notebooks, in a few clicks ...

2014-11-21 Thread Kyle Kelley
They're shared across users. What's reported by the system isn't accurate, 
as you get 1/64 of available RAM and CPUs. I'm thinking about how to allow 
for giving users extra capacity when there aren't many 
users. https://github.com/jupyter/tmpnb/issues/107

Note: https://github.com/jupyter/docker-demo-images now has the demo image 
that uses this. If you look in the notebooks directory, I put together a 
Julia notebook (stealing freely from the Gadfly documentation). I'd LOVE to 
get a really good Julia notebook in there instead, as I've been focused on 
the backend pieces.

The same software was used for the Nature demo.


On Saturday, November 8, 2014 11:19:49 PM UTC-6, Viral Shah wrote:

 Those are some pretty beefy machines! Are those shared across multiple 
 users, or does every user get a monster? :-)

 -viral

 On Sunday, November 9, 2014 10:05:49 AM UTC+5:30, cdm wrote:


 point a browser to

https://tmpnb.org/


 click the New Notebook button
 in the upper right area of the page.

 use the drop down menu in the
 upper right to change the kernel
 from Python to Julia ...

 run some Julia code, like

   Base.banner()

 just remember the notebook
 convention of Shift-Enter to
 evaluate ...

 nice.

 cdm



[julia-users] Segfault when passing large arrays to Julia from C

2014-11-21 Thread David Smith
The following code works as posted, but when I increase N to 1000, I 
get a segfault. I don't see why that would break anything, as the size_t 
type should be large enough to handle that.

Can someone point me in the right direction?

Thanks!


#include <julia.h> 
#include <stdio.h> 

int main() 
{ 
size_t N = 100; 
jl_init(NULL); 
JL_SET_STACK_BASE; 
jl_value_t* array_type = jl_apply_array_type( jl_float64_type, 1 ); 
jl_array_t* x = jl_alloc_array_1d(array_type, N); 
jl_array_t* y = jl_alloc_array_1d(array_type, N); 
JL_GC_PUSH2(&x, &y); 
double* xData = jl_array_data(x); 
double* yData = jl_array_data(y); 
for (size_t i=0; i<jl_array_len(x); i++){ 
xData[i] = i; 
yData[i] = i; 
} 
jl_eval_string("myfft(x,y) = fft(complex(x,y))"); 
jl_function_t *func = jl_get_function(jl_current_module, "myfft"); 
jl_value_t* jlret = jl_call2(func, (jl_value_t*) x, (jl_value_t*)y); 
double *ret = jl_unbox_voidpointer(jlret); 
for(size_t i=0; i<10; i++) 
printf("(%f,%f) = %f + %fi\n", xData[i], yData[i], ret[2*i], 
ret[2*i+1]); 
JL_GC_POP(); 
return 0; 
}


Re: [julia-users] Segfault when passing large arrays to Julia from C

2014-11-21 Thread Jameson Nash
You are missing a gc root for x when you allocate y. I can't say if that is
the only issue, since you seem to be unboxing a jl_array_t* using
jl_unbox_voidpointer
On Fri, Nov 21, 2014 at 1:51 PM David Smith david.sm...@gmail.com wrote:

 The following code works as posted, but when I increase N to 1000, I
 get a segfault. I don't see why that would break anything, as the size_t
 type should be large enough to handle that.

 Can someone point me in the right direction?

 Thanks!


 #include <julia.h>
 #include <stdio.h>

 int main()
 {
 size_t N = 100;
 jl_init(NULL);
 JL_SET_STACK_BASE;
 jl_value_t* array_type = jl_apply_array_type( jl_float64_type, 1 );
 jl_array_t* x = jl_alloc_array_1d(array_type, N);
 jl_array_t* y = jl_alloc_array_1d(array_type, N);
 JL_GC_PUSH2(&x, &y);
 double* xData = jl_array_data(x);
 double* yData = jl_array_data(y);
 for (size_t i=0; i<jl_array_len(x); i++){
 xData[i] = i;
 yData[i] = i;
 }
 jl_eval_string("myfft(x,y) = fft(complex(x,y))");
 jl_function_t *func = jl_get_function(jl_current_module, "myfft");
 jl_value_t* jlret = jl_call2(func, (jl_value_t*) x, (jl_value_t*)y);
 double *ret = jl_unbox_voidpointer(jlret);
 for(size_t i=0; i<10; i++)
 printf("(%f,%f) = %f + %fi\n", xData[i], yData[i], ret[2*i],
 ret[2*i+1]);
 JL_GC_POP();
 return 0;
 }



Re: [julia-users] Broadcasting variables

2014-11-21 Thread Tim Holy
My experiments with parallelism tend to occur in focused blocks, and I haven't 
done it in quite a while. So I doubt I can help much. But in general I suspect 
you're encountering these problems because much of the IPC goes through 
thunks, and so a lot of stuff gets reclaimed when execution is done.

If I were experimenting, I'd start by trying to create RemoteRef()s and 
put!()-ing my variables into them. Then perhaps you might be able to fetch 
them from other processes. Not sure that will work, but it seems to be worth a try.
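A sketch of that pattern. `RemoteRef` is the 0.3-era name; the code below uses the later `Distributed` stdlib spelling (`RemoteChannel`), but the put!/fetch idea is the same:

```julia
using Distributed
addprocs(1)

# A reference whose backing store lives on the worker; any process
# holding `rc` can put! into it or fetch from it.
rc = RemoteChannel(() -> Channel{Matrix{Float64}}(1), workers()[1])
put!(rc, rand(3, 3))

# fetch reads the value without removing it, so it stays available,
# including from code running on the worker itself:
sz = remotecall_fetch(r -> size(fetch(r)), workers()[1], rc)
sz == (3, 3)    # true
```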

HTH,
--Tim

On Thursday, November 20, 2014 08:20:19 PM Madeleine Udell wrote:
 I'm trying to use parallelism in julia for a task with a structure that I
 think is quite pervasive. It looks like this:
 
 # broadcast lists of functions f and g to all processes so they're
 available everywhere
 # create shared arrays X,Y on all processes so they're available everywhere
 for iteration=1:1000
 @parallel for i=1:size(X)
 X[i] = f[i](Y)
 end
 @parallel for j=1:size(Y)
 Y[j] = g[j](X)
 end
 end
 
 I'm having trouble making this work, and I'm not sure where to dig around
 to find a solution. Here are the difficulties I've encountered:
 
 * @parallel doesn't allow me to create persistent variables on each
 process; ie, the following results in an error.
 
 s = Base.shmem_rand(12,3)
 @parallel for i=1:nprocs() m,n = size(s) end
 @parallel for i=1:nprocs() println(m) end
 
 * @everywhere does allow me to create persistent variables on each process,
 but doesn't send any data at all, including the variables I need in order
 to define new variables. Eg the following is an error: s is a shared array,
 but the variable (ie pointer to) s is apparently not shared.
 s = Base.shmem_rand(12,3)
 @everywhere m,n = size(s)
 
 Here are the kinds of questions I'd like to see protocode for:
 * How can I broadcast a variable so that it is available and persistent on
 every process?
 * How can I create a reference to the same shared array s that is
 accessible from every process?
 * How can I send a command to be performed in parallel, specifying which
 variables should be sent to the relevant processes and which should be
 looked up in the local namespace?
 
 Note that everything I ask above is not specific to shared arrays; the same
 constructs would also be extremely useful in the distributed case.
 
 --
 
 An interesting partial solution is the following:
 funcs! = Function[x -> x[:] = x+k for k=1:3]
 d = drand(3,12)
 let funcs! = funcs!
   @sync @parallel for k in 1:3
 funcs![myid()-1](localpart(d))
   end
 end
 
 Here, I'm not sure why the let statement is necessary to send funcs!, since
 d is sent automatically.
 
 -
 
 Thanks!
 Madeleine



Re: [julia-users] Typeclass implementation

2014-11-21 Thread Mauro
Sebastian, in Haskell, is there a way to get all functions which are
constrained by one or several type classes?  I.e. which functions are
provided by a type-class?  (as opposed to which functions need to be
implemented to belong to a type-class)

On Fri, 2014-11-21 at 16:54, Jiahao Chen jia...@mit.edu wrote:
 If instead I want to say this new type acts like an Integer, there's no
 canonical place for me to find out what all the functions are I need to
 implement.

 The closest thing we have now is methodswith(Integer)
 and methodswith(Integer, true) (the latter gives also all the methods that
 Integer inherits from its supertypes).

 Thanks,

 Jiahao Chen
 Staff Research Scientist
 MIT Computer Science and Artificial Intelligence Laboratory

 On Fri, Nov 21, 2014 at 9:54 AM, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:

 I will look into Traits.jl -- interesting package.

 To get traction and some of the great power of comparability, the base
 library will need to be carefully decomposed into traits, which (as noted
 in some of the issue conversations on github) takes you straight to the
 great research Haskell is doing in this area.

 *Sebastian Good*


 On Fri, Nov 21, 2014 at 9:38 AM, John Myles White 
 johnmyleswh...@gmail.com wrote:

 This sounds a bit like a mix of two problems:

 (1) A lack of interfaces:

  - a) A lack of formal interfaces, which will hopefully be addressed by
 something like Traits.jl at some point. (
 https://github.com/JuliaLang/julia/issues/6975)

  - b) A lack of documentation for informal interfaces, such as the
 methods that AbstractArray objects must implement.

 (2) A lack of delegation when you make wrapper types:
 https://github.com/JuliaLang/julia/pull/3292

 The first has moved forward a bunch thanks to Mauro's work. The second
 has not gotten much further, although Kevin Squire wrote a different
 delegate macro that's noticeably better than the draft I wrote.

  -- John

 On Nov 21, 2014, at 2:31 PM, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:

 In implementing new kinds of numbers, I've found it difficult to know
 just how many functions I need to implement for the general library to
 just work on them. Take as an example a byte-swapped, e.g. big-endian,
 integer. This is handy when doing memory-mapped I/O on a file with data
 written in network order. It would be nice to just implement, say,
 Int32BigEndian and have it act like a real number. (Then I could just
 reinterpret a mmaped array and work directly off it) In general, we'd
 convert to Int32 at the earliest opportunity we had. For instance the
 following macro introduces a new type which claims to be derived from
 $base_type, and implements conversions and promotion rules to get it into a
 native form ($n_type) whenever it's used.

 macro encoded_bitstype(name, base_type, bits_type, n_type, to_n, from_n)
 quote
  immutable $name <: $base_type
 bits::$bits_type
 end

 Base.bits(x::$name) = bits(x.bits)
 Base.bswap(x::$name) = $name(bswap(x.bits))

 Base.convert(::Type{$n_type}, x::$name) = $to_n(x.bits)
 Base.convert(::Type{$name}, x::$n_type) = $name($from_n(x))
 Base.promote_rule(::Type{$name}, ::Type{$n_type}) = $n_type
 Base.promote_rule(::Type{$name}, ::Type{$base_type}) = $n_type
 end
 end

 I can use it like this

 @encoded_bitstype(Int32BigEndian, Signed, Int32, Int32, bswap, bswap)

 But unfortunately, it doesn't work out of the box because the conversions
 need to be explicit. I noticed that many of the math functions promote
 their arguments to a common type, but the following trick doesn't work,
 presumably because the promotion algorithm doesn't ask to promote types
 that are already identical.

 Base.promote_rule(::Type{$name}, ::Type{$name}) = $n_type

 It seems like there are a couple of issues this raises, and I know I've
 seen similar questions on this list as people implement new kinds of
 things, e.g. exotic matrices.

  1. One possibility would be to allow an implicit promotion, perhaps
  expressed as the self-promotion above. I say I'm an Int32BigEndian, or
  CompressedVector, or what have you, and provide a way to turn me into an
  Int32 or Vector implicitly to take advantage of all the functions already
  written on those types. I'm not sure this is a great option for the
  language since it's been explicitly avoided elsewhere, but I'm curious if
  there have been any discussions in this direction.

 2. If instead I want to say this new type acts like an Integer, there's
 no canonical place for me to find out what all the functions are I need to
 implement. Ultimately, these are like Haskell's typeclasses, Ord, Eq, etc.
 By trial and error, we can determine many of them and implement them this
 way

 macro as_number(name, n_type)
  quote
 global +(x::$name, y::$name) = +(convert($n_type, x),
 convert($n_type, y))
 global *(x::$name, y::$name) = 

Re: [julia-users] Typeclass implementation

2014-11-21 Thread Sebastian Good
I'm not sure I understand the distinction you make. You declare a typeclass
by defining the functions needed to qualify for it, as well as default
implementations. e.g.

class Eq a where
  (==), (/=) :: a -> a -> Bool
  x /= y     = not (x == y)

the typeclass 'Eq a' requires implementation of two functions, (==) and
(/=), of type a -> a -> Bool (which would look like (a,a) --> Bool in the
proposed Julia function type syntax). The (/=) function has a default
implementation in terms of the (==) function, though you could define your
own for your own type if it were an instance of this typeclass.


*Sebastian Good*


On Fri, Nov 21, 2014 at 2:11 PM, Mauro mauro...@runbox.com wrote:

 Sebastian, in Haskell, is there a way to get all functions which are
 constrained by one or several type classes?  I.e. which functions are
 provided by a type-class?  (as opposed to which functions need to be
 implemented to belong to a type-class)

 On Fri, 2014-11-21 at 16:54, Jiahao Chen jia...@mit.edu wrote:
  If instead I want to say this new type acts like an Integer, there's
 no
  canonical place for me to find out what all the functions are I need to
  implement.
 
  The closest thing we have now is methodswith(Integer)
  and methodswith(Integer, true) (the latter gives also all the methods
 that
  Integer inherits from its supertypes).
 
  Thanks,
 
  Jiahao Chen
  Staff Research Scientist
  MIT Computer Science and Artificial Intelligence Laboratory
 
  On Fri, Nov 21, 2014 at 9:54 AM, Sebastian Good 
  sebast...@palladiumconsulting.com wrote:
 
  I will look into Traits.jl -- interesting package.
 
  To get traction and some of the great power of comparability, the base
  library will need to be carefully decomposed into traits, which (as
 noted
  in some of the issue conversations on github) takes you straight to the
  great research Haskell is doing in this area.
 
  *Sebastian Good*
 
 
  On Fri, Nov 21, 2014 at 9:38 AM, John Myles White 
  johnmyleswh...@gmail.com wrote:
 
  This sounds a bit like a mix of two problems:
 
  (1) A lack of interfaces:
 
   - a) A lack of formal interfaces, which will hopefully be addressed by
  something like Traits.jl at some point. (
  https://github.com/JuliaLang/julia/issues/6975)
 
   - b) A lack of documentation for informal interfaces, such as the
  methods that AbstractArray objects must implement.
 
  (2) A lack of delegation when you make wrapper types:
  https://github.com/JuliaLang/julia/pull/3292
 
  The first has moved forward a bunch thanks to Mauro's work. The second
  has not gotten much further, although Kevin Squire wrote a different
  delegate macro that's noticeably better than the draft I wrote.
 
   -- John
 
  On Nov 21, 2014, at 2:31 PM, Sebastian Good 
  sebast...@palladiumconsulting.com wrote:
 
  In implementing new kinds of numbers, I've found it difficult to know
  just how many functions I need to implement for the general library to
  just work on them. Take as an example a byte-swapped, e.g.
 big-endian,
  integer. This is handy when doing memory-mapped I/O on a file with data
  written in network order. It would be nice to just implement, say,
  Int32BigEndian and have it act like a real number. (Then I could just
  reinterpret a mmaped array and work directly off it) In general, we'd
  convert to Int32 at the earliest opportunity we had. For instance the
  following macro introduces a new type which claims to be derived from
  $base_type, and implements conversions and promotion rules to get it
 into a
  native form ($n_type) whenever it's used.
 
  macro encoded_bitstype(name, base_type, bits_type, n_type, to_n,
 from_n)
  quote
  immutable $name <: $base_type
  bits::$bits_type
  end
 
  Base.bits(x::$name) = bits(x.bits)
  Base.bswap(x::$name) = $name(bswap(x.bits))
 
  Base.convert(::Type{$n_type}, x::$name) = $to_n(x.bits)
  Base.convert(::Type{$name}, x::$n_type) = $name($from_n(x))
  Base.promote_rule(::Type{$name}, ::Type{$n_type}) = $n_type
  Base.promote_rule(::Type{$name}, ::Type{$base_type}) = $n_type
  end
  end
 
  I can use it like this
 
  @encoded_bitstype(Int32BigEndian, Signed, Int32, Int32, bswap, bswap)
 
  But unfortunately, it doesn't work out of the box because the
 conversions
  need to be explicit. I noticed that many of the math functions promote
  their arguments to a common type, but the following trick doesn't work,
  presumably because the promotion algorithm doesn't ask to promote types
  that are already identical.
 
  Base.promote_rule(::Type{$name}, ::Type{$name}) = $n_type
 
  It seems like there are a couple of issues this raises, and I know I've
  seen similar questions on this list as people implement new kinds of
  things, e.g. exotic matrices.
 
  1. One possibility would be to allow an implicit promotion, perhaps
  expressed as the self-promotion above. I say I'm a Int32BigEndian, or
  

[julia-users] Re: iJulia notebooks, in a few clicks ...

2014-11-21 Thread cdm

a respectable start to a really good Julia notebook list might include
this offering from the Julia 0.3 Release Announcement ...

*Topical highlights*

“The colors of chemistry 
http://jiahao.github.io/julia-blog/2014/06/09/the-colors-of-chemistry.html” 
notebook by Jiahao Chen http://github.com/jiahao demonstrating IJulia, 
Gadfly, dimensional computation with SIUnits, and more.
  
 http://jiahao.github.io/julia-blog/2014/06/09/the-colors-of-chemistry.html


enjoy !!!

cdm


On Friday, November 21, 2014 10:35:26 AM UTC-8, Kyle Kelley wrote:


 Note: https://github.com/jupyter/docker-demo-images now has the demo 
 image that uses this. If you look in the notebooks directory, I put 
 together a Julia notebook (stealing freely from the Gadfly documentation). 
 I'd LOVE to get a really good Julia notebook in there instead, as I've been 
 focused on the backend pieces.



[julia-users] Re: iJulia notebooks, in a few clicks ...

2014-11-21 Thread cdm

additional sources for potential list elements:

   https://github.com/stevengj/Julia-EuroSciPy14

   https://github.com/JuliaCon/presentations


best,

cdm


On Friday, November 21, 2014 11:24:51 AM UTC-8, cdm wrote:


 a respectable start to a really good Julia notebook list might include
 this offering from the Julia 0.3 Release Announcement ...

 *Topical highlights*

 “The colors of chemistry 
 http://jiahao.github.io/julia-blog/2014/06/09/the-colors-of-chemistry.html” 
 notebook by Jiahao Chen http://github.com/jiahao demonstrating IJulia, 
 Gadfly, dimensional computation with SIUnits, and more.

 http://jiahao.github.io/julia-blog/2014/06/09/the-colors-of-chemistry.html


 enjoy !!!

 cdm


 On Friday, November 21, 2014 10:35:26 AM UTC-8, Kyle Kelley wrote:


 Note: https://github.com/jupyter/docker-demo-images now has the demo 
 image that uses this. If you look in the notebooks directory, I put 
 together a Julia notebook (stealing freely from the Gadfly documentation). 
 I'd LOVE to get a really good Julia notebook in there instead, as I've been 
 focused on the backend pieces.



Re: [julia-users] Dict performance with String keys

2014-11-21 Thread Stefan Karpinski
I expect we may be able to swap in a new String representation in a month
and let the breakage begin. Any code that relies on the internal representation
of Strings being specifically Array{Uint8,1} will break; in particular,
code using the mutability thereof will fail. The I/O stuff is going to take a
lot more work to make things really efficient. The end goal for me is
zero-copy I/O.

On Thu, Nov 20, 2014 at 10:58 PM, Pontus Stenetorp pon...@stenetorp.se
wrote:

 On 21 November 2014 03:41, Stefan Karpinski ste...@karpinski.org wrote:
 
  I'm currently working on an overhaul of byte vectors and strings, which
 will
  be followed by an overhaul of I/O (how one typically gets byte vectors).
 It
  will take a bit of time but all things string and I/O related should be
 much
  more efficient once I'm done. There's a lot of work to do though...

 No worries, I think we all understand the amount of work involved in
 getting this right and are all cheering for you.  I have considered
 pitching in with at least some large-scale text processing benchmarks
 along the lines of what was discussed in #8826.  Hopefully I can get
 to this once I get the majority of my current PRs and contributions
 settled.

 Pontus



Re: [julia-users] Typeclass implementation

2014-11-21 Thread Mauro
Yep, defining == is needed to implement Eq.  But then is there a way to
query what functions are constrained by Eq?  For instance, give me a
list of all functions which Eq provides, i.e. with type: Eq(a) => ...

This would be similar to methodswith in Julia, although methodswith
returns both the implementation functions and the provided
functions.  Anyway, I was just wondering.
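
[Editorial aside: the Eq default-method pattern under discussion has a direct
Julia analogue, sketched below in current Julia syntax (the `Interval` type is
invented purely for illustration).]

```julia
# Defining == for a type is enough to get != "for free", because Base
# ships the generic fallback !=(x, y) = !(x == y), much like Haskell's
# default implementation of (/=) in terms of (==).
struct Interval
    lo::Float64
    hi::Float64
end

Base.:(==)(a::Interval, b::Interval) = a.lo == b.lo && a.hi == b.hi

Interval(0, 1) == Interval(0, 1)   # true
Interval(0, 1) != Interval(0, 2)   # true, via the generic fallback
```

Afterwards `methodswith(Interval)` lists the method just defined, which is the
Julia-side version of the query above.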

On Fri, 2014-11-21 at 20:21, Sebastian Good sebast...@palladiumconsulting.com 
wrote:
 I'm not sure I understand the distinction you make. You declare a typeclass
 by defining the functions needed to qualify for it, as well as default
 implementations. e.g.

  class  Eq a  where
    (==), (/=) :: a -> a -> Bool
    x /= y     =  not (x == y)

 the typeclass 'Eq a' requires implementation of two functions, (==) and
 (/=), of type a -> a -> Bool (which would look like (a,a) --> Bool in the
 proposed Julia function type syntax). The (/=) function has a default
 implementation in terms of the (==) function, though you could define your
 own for your own type if it were an instance of this typeclass.


 *Sebastian Good*


 On Fri, Nov 21, 2014 at 2:11 PM, Mauro mauro...@runbox.com wrote:

 Sebastian, in Haskell, is there a way to get all functions which are
 constrained by one or several type classes?  I.e. which functions are
 provided by a type-class?  (as opposed to which functions need to be
 implemented to belong to a type-class)

 On Fri, 2014-11-21 at 16:54, Jiahao Chen jia...@mit.edu wrote:
  If instead I want to say this new type acts like an Integer, there's
 no
  canonical place for me to find out what all the functions are I need to
  implement.
 
  The closest thing we have now is methodswith(Integer)
  and methodswith(Integer, true) (the latter gives also all the methods
 that
  Integer inherits from its supertypes).
 
  Thanks,
 
  Jiahao Chen
  Staff Research Scientist
  MIT Computer Science and Artificial Intelligence Laboratory
 
  On Fri, Nov 21, 2014 at 9:54 AM, Sebastian Good 
  sebast...@palladiumconsulting.com wrote:
 
  I will look into Traits.jl -- interesting package.
 
  To get traction and some of the great power of comparability, the base
  library will need to be carefully decomposed into traits, which (as
 noted
  in some of the issue conversations on github) takes you straight to the
  great research Haskell is doing in this area.
 
  *Sebastian Good*
 
 
  On Fri, Nov 21, 2014 at 9:38 AM, John Myles White 
  johnmyleswh...@gmail.com wrote:
 
  This sounds a bit like a mix of two problems:
 
  (1) A lack of interfaces:
 
   - a) A lack of formal interfaces, which will hopefully be addressed by
  something like Traits.jl at some point. (
  https://github.com/JuliaLang/julia/issues/6975)
 
   - b) A lack of documentation for informal interfaces, such as the
  methods that AbstractArray objects must implement.
 
  (2) A lack of delegation when you make wrapper types:
  https://github.com/JuliaLang/julia/pull/3292
 
  The first has moved forward a bunch thanks to Mauro's work. The second
  has not gotten much further, although Kevin Squire wrote a different
  delegate macro that's noticeably better than the draft I wrote.
 
   -- John
 
  On Nov 21, 2014, at 2:31 PM, Sebastian Good 
  sebast...@palladiumconsulting.com wrote:
 
  In implementing new kinds of numbers, I've found it difficult to know
  just how many functions I need to implement for the general library to
  just work on them. Take as an example a byte-swapped, e.g.
 big-endian,
  integer. This is handy when doing memory-mapped I/O on a file with data
  written in network order. It would be nice to just implement, say,
  Int32BigEndian and have it act like a real number. (Then I could just
  reinterpret a mmaped array and work directly off it) In general, we'd
  convert to Int32 at the earliest opportunity we had. For instance the
  following macro introduces a new type which claims to be derived from
  $base_type, and implements conversions and promotion rules to get it
 into a
  native form ($n_type) whenever it's used.
 
  macro encoded_bitstype(name, base_type, bits_type, n_type, to_n,
 from_n)
  quote
   immutable $name <: $base_type
  bits::$bits_type
  end
 
  Base.bits(x::$name) = bits(x.bits)
  Base.bswap(x::$name) = $name(bswap(x.bits))
 
  Base.convert(::Type{$n_type}, x::$name) = $to_n(x.bits)
  Base.convert(::Type{$name}, x::$n_type) = $name($from_n(x))
  Base.promote_rule(::Type{$name}, ::Type{$n_type}) = $n_type
  Base.promote_rule(::Type{$name}, ::Type{$base_type}) = $n_type
  end
  end
 
  I can use it like this
 
  @encoded_bitstype(Int32BigEndian, Signed, Int32, Int32, bswap, bswap)
 
  But unfortunately, it doesn't work out of the box because the
 conversions
  need to be explicit. I noticed that many of the math functions promote
  their arguments to a common type, but the following trick doesn't work,
  presumably because 

Re: [julia-users] Typeclass implementation

2014-11-21 Thread Sebastian Good
Ah, that I'm not sure of. There is no run-time reflection in Haskell,
though I don't doubt that artifacts of frightful intelligence exist to do
what you ask. Hoogle is a good place to start for that sort of thing.

Though methodswith is of limited use for determining a minimal
implementation. For instance, in my example, I can avoid implementing abs
because I define flipsign. When defining an encoded floating point, tan
magically works, but atan doesn't.

*Sebastian Good*


On Fri, Nov 21, 2014 at 2:47 PM, Mauro mauro...@runbox.com wrote:

 Yep, defining == is needed to implement Eq.  But then is there a way to
 query what functions are constrained by Eq?  For instance, give me a
 list of all functions which Eq provides, i.e. with type: Eq(a) => ...

 This would be similar to methodswith in Julia, although methodswith
 returns both the implementation functions and the provided
 functions.  Anyway, I was just wondering.

 On Fri, 2014-11-21 at 20:21, Sebastian Good 
 sebast...@palladiumconsulting.com wrote:
  I'm not sure I understand the distinction you make. You declare a
 typeclass
  by defining the functions needed to qualify for it, as well as default
  implementations. e.g.
 
  class  Eq a  where
    (==), (/=) :: a -> a -> Bool
    x /= y     =  not (x == y)
 
  the typeclass 'Eq a' requires implementation of two functions, (==) and
  (/=), of type a -> a -> Bool (which would look like (a,a) --> Bool in the
  proposed Julia function type syntax). The (/=) function has a default
  implementation in terms of the (==) function, though you could define
 your
  own for your own type if it were an instance of this typeclass.
 
 
  *Sebastian Good*
 
 
  On Fri, Nov 21, 2014 at 2:11 PM, Mauro mauro...@runbox.com wrote:
 
  Sebastian, in Haskell, is there a way to get all functions which are
  constrained by one or several type classes?  I.e. which functions are
  provided by a type-class?  (as opposed to which functions need to be
  implemented to belong to a type-class)
 
  On Fri, 2014-11-21 at 16:54, Jiahao Chen jia...@mit.edu wrote:
   If instead I want to say this new type acts like an Integer,
 there's
  no
   canonical place for me to find out what all the functions are I need
 to
   implement.
  
   The closest thing we have now is methodswith(Integer)
   and methodswith(Integer, true) (the latter gives also all the methods
  that
   Integer inherits from its supertypes).
  
   Thanks,
  
   Jiahao Chen
   Staff Research Scientist
   MIT Computer Science and Artificial Intelligence Laboratory
  
   On Fri, Nov 21, 2014 at 9:54 AM, Sebastian Good 
   sebast...@palladiumconsulting.com wrote:
  
   I will look into Traits.jl -- interesting package.
  
   To get traction and some of the great power of comparability, the
 base
   library will need to be carefully decomposed into traits, which (as
  noted
   in some of the issue conversations on github) takes you straight to
 the
   great research Haskell is doing in this area.
  
   *Sebastian Good*
  
  
   On Fri, Nov 21, 2014 at 9:38 AM, John Myles White 
   johnmyleswh...@gmail.com wrote:
  
   This sounds a bit like a mix of two problems:
  
   (1) A lack of interfaces:
  
- a) A lack of formal interfaces, which will hopefully be
 addressed by
   something like Traits.jl at some point. (
   https://github.com/JuliaLang/julia/issues/6975)
  
- b) A lack of documentation for informal interfaces, such as the
   methods that AbstractArray objects must implement.
  
   (2) A lack of delegation when you make wrapper types:
   https://github.com/JuliaLang/julia/pull/3292
  
   The first has moved forward a bunch thanks to Mauro's work. The
 second
   has not gotten much further, although Kevin Squire wrote a different
   delegate macro that's noticeably better than the draft I wrote.
  
-- John
  
   On Nov 21, 2014, at 2:31 PM, Sebastian Good 
   sebast...@palladiumconsulting.com wrote:
  
   In implementing new kinds of numbers, I've found it difficult to
 know
   just how many functions I need to implement for the general library
 to
   just work on them. Take as an example a byte-swapped, e.g.
  big-endian,
   integer. This is handy when doing memory-mapped I/O on a file with
 data
   written in network order. It would be nice to just implement, say,
   Int32BigEndian and have it act like a real number. (Then I could
 just
   reinterpret a mmaped array and work directly off it) In general,
 we'd
   convert to Int32 at the earliest opportunity we had. For instance
 the
   following macro introduces a new type which claims to be derived
 from
   $base_type, and implements conversions and promotion rules to get it
  into a
   native form ($n_type) whenever it's used.
  
   macro encoded_bitstype(name, base_type, bits_type, n_type, to_n,
  from_n)
   quote
    immutable $name <: $base_type
   bits::$bits_type
   end
  
   Base.bits(x::$name) = bits(x.bits)
   

[julia-users] Memory mapping composite type + fixed length strings

2014-11-21 Thread Joshua Adelman
I'm playing around with Julia for the first time in an attempt to see if I 
can replace a Python + Cython component of a system I'm building. Basically 
I have a file of bytes representing a numpy structured/recarray (in memory 
this is an array of structs). This gets memory mapped into a numpy array as 
(Python code):

f = open(data_file, 'r+')
cmap = mmap.mmap(f.fileno(), nbytes)
data_array = np.ndarray(size, dtype=dtype, buffer=cmap)


where dtype=[('x', np.int32), ('y', np.float64), ('name', 'S17')].

In cython I would create a C packed struct and to deal with the fixed 
length string elements, I would specify them as char[N] arrays:

cdef packed struct atype:
np.int32_t x
np.float64_t y
char[17] name

I'm trying to figure out how I would accomplish something similar in Julia. 
Setting aside the issue of the fixed length strings for a moment, I thought 
to initially create a composite type:

immutable AType
x::Int32
y::Float64
name::???
end

and then if I had an file containing 20 records use:

f = open("test1.dat", "r")
data = mmap_array(AType, 20, f)

but I get an error:

ERROR: `mmap_array` has no method matching mmap_array(::Type{AType}, 
::Int64, ::IOStream)

Is there a way to memory map a file into an array of custom 
records/composite types in Julia? And if there is, how should one represent 
the fixed length string fields?

Any suggestions would be much appreciated.

Josh



Re: [julia-users] Memory mapping composite type + fixed length strings

2014-11-21 Thread Tim Holy
You'll see why if you type `methods(mmap_array)`: the dims has to be 
represented as a tuple.

Currently, the only way I know of to create a fixed-sized buffer as an element 
of a struct in julia is via immutables with one field per object. Here's one 
example:
https://github.com/JuliaGPU/CUDArt.jl/blob/1742a19b35a52ecec4ee14cfbec823f8bcb22e0f/gen/gen_libcudart_h.jl#L403-L660

It has not escaped notice that this is less than ideal :-).

--Tim
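
[Editorial aside: both points can be sketched in current Julia, which uses the
`Mmap` standard library instead of 0.3's `mmap_array`; `AType` below omits the
string field, and note that the struct layout includes alignment padding.]

```julia
using Mmap

struct AType            # a plain-bits record; Julia pads x so sizeof == 16
    x::Int32
    y::Float64
end

path, io = mktemp()                         # scratch file, open for read/write
write(io, [AType(1, 2.5), AType(3, 4.5)])   # raw struct bytes, padding included
flush(io)
seekstart(io)                               # mmap maps from the stream position

data = Mmap.mmap(io, Vector{AType}, (2,))   # dims passed as a tuple
data[2].y                                   # 4.5
```

`Mmap.mmap` requires the element type to be a bits type, which is also why the
fixed-length string field is awkward: a `String` field makes the struct
non-bits, hence the one-field-per-byte immutables Tim links to.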

On Friday, November 21, 2014 11:57:10 AM Joshua Adelman wrote:
 I'm playing around with Julia for the first time in an attempt to see if I
 can replace a Python + Cython component of a system I'm building. Basically
 I have a file of bytes representing a numpy structured/recarray (in memory
 this is an array of structs). This gets memory mapped into a numpy array as
 (Python code):
 
 f = open(data_file, 'r+')
 cmap = mmap.mmap(f.fileno(), nbytes)
 data_array = np.ndarray(size, dtype=dtype, buffer=cmap)
 
 
 where dtype=[('x', np.int32), ('y', np.float64), ('name', 'S17')].
 
 In cython I would create a C packed struct and to deal with the fixed
 length string elements, I would specify them as char[N] arrays:
 
 cdef packed struct atype:
 np.int32_t x
  np.float64_t y
 char[17] name
 
 I'm trying to figure out how I would accomplish something similar in Julia.
 Setting aside the issue of the fixed length strings for a moment, I thought
 to initially create a composite type:
 
 immutable AType
 x::Int32
 y::Float64
 name::???
 end
 
 and then if I had an file containing 20 records use:
 
  f = open("test1.dat", "r")
 data = mmap_array(AType, 20, f)
 
 but I get an error:
 
 ERROR: `mmap_array` has no method matching mmap_array(::Type{AType},
 
 ::Int64, ::IOStream)
 
 Is there a way to memory map a file into an array of custom
 records/composite types in Julia? And if there is, how should one represent
 the fixed length string fields?
 
 Any suggestions would be much appreciated.
 
 Josh



[julia-users] capture STDOUT and STDERR from Julia REPL (without exiting)

2014-11-21 Thread Max Suster

In an effort to improve VideoIO.jl, we are trying to capture both STDOUT 
and STDERR output of a call to an external binary from the Julia REPL. I 
tried a few suggestions based on previous discussions (JuliaLang/julia#3823 
https://github.com/JuliaLang/julia/issues/3823, Capture the output of 
Julia's console 
https://groups.google.com/forum/#!topic/julia-users/3wGChHHYoxo)  on how 
to this but none of them worked from the Julia REPL in this case.

I can direct both STDOUT and STDERR output to a .txt file when calling the 
following from the bash terminal:

$ julia -e 'open(`/usr/local/Cellar/ffmpeg/2.4.2/bin/ffmpeg -list_devices true 
-f avfoundation -i \"\"`)' 1> stdout.txt 2> stderr.txt


However, when using the equivalent call from inside the julia REPL:

f = open(`/usr/local/Cellar/ffmpeg/2.4.2/bin/ffmpeg -list_devices true -f 
avfoundation -i \"\"`, "r", STDERR)
readall(f)


ERROR: `readall` has no method matching readall(::(Pipe,Process))


I know there is a way to capture all the output but it requires exiting the 
Julia REPL.   Does anyone know a relatively straightforward way to capture 
STDOUT and STDERR from Julia -- without exiting the Julia REPL?
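
[Editorial aside: in current Julia this can be done from the REPL with
`pipeline`; the sketch below postdates this thread's API, and the ffmpeg call
is replaced by a portable stand-in command.]

```julia
# Redirect stdout and stderr of an external command to separate files,
# then read them back, all without leaving the REPL.
cmd = `$(Base.julia_cmd()) -e 'println("to stdout"); println(stderr, "to stderr")'`
run(pipeline(cmd, stdout="stdout.txt", stderr="stderr.txt"))

readchomp("stdout.txt")   # "to stdout"
readchomp("stderr.txt")   # "to stderr"
```

The same keywords accept `IO` objects, so the output can also be captured
in-process rather than via temporary files.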



[julia-users] Re: Contributing to a Julia Package

2014-11-21 Thread Tomas Lycken
Another slightly OT remark, that I believe many might find useful, is that 
the [github hub project](https://github.com/github/hub) is really worth 
checking out.

It's basically a super-set of `git`, intended to be aliased (`alias 
git=hub` on e.g. ubuntu), and it forwards any regular `git` command to 
regular `git`. However, it also adds a bunch of sweet features like `git 
pull-request` which will do all the required steps to create a pull request 
to merge the current branch into `origin/master`, including creating a fork 
etc if it's necessary. You get a url to the pull-request in the UI =)

It requires you to have a Github account and to configure both `git` and 
`hub` to be aware of (and have access to via ssh) that account, but there 
are really nice instructions in the docs for the `hub` project.

// T

On Monday, November 10, 2014 9:58:15 PM UTC+1, Ivar Nesje wrote:

 Another important point (for actively developed packages) is that 
 Pkg.add() checks out the commit of the latest released version registered 
 in METADATA.jl. Most packages do development on the master branch, so you 
 should likely base your changes on master, rather than the latest released 
 version.

 To do this, you can use `Pkg.checkout()`, but `git checkout master` will 
 also work.

 Ivar

 kl. 21:07:49 UTC+1 mandag 10. november 2014 skrev Tim Wheeler følgende:

 Thank you! It seems to have worked.
 Per João's suggestions, I had to:


- Create a fork on Github of the target package repository
- Clone my fork locally
- Create a branch on my local repository
- Add, commit, and push my changes to said branch
- On github I could then submit the pull request from my forked repo 
to the upstream master
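
[Editorial aside: the local half of the steps above can be illustrated with
plain git. This is a hedged sketch; repository and branch names are made up,
and the fork/push/PR steps need a GitHub remote, so they appear only as
comments.]

```shell
# Create a throwaway repo standing in for the cloned fork
git init demo-pkg && cd demo-pkg
git -c user.name=Demo -c user.email=demo@example.com commit --allow-empty -m "init"

# Branch, change, commit: the same sequence as in the list above
git checkout -b mybranch
echo '# fix' > fix.jl
git add fix.jl
git -c user.name=Demo -c user.email=demo@example.com commit -m "Add fix"

git log --oneline          # shows the new commit on mybranch
# Then: git push -u origin mybranch, and open the pull request on GitHub
```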






 On Monday, November 10, 2014 11:17:55 AM UTC-8, Tim Wheeler wrote:

 Hello Julia Users,

 I wrote some code that I would like to submit via pull request to a 
 Julia package. The thing is, I am new to this and do not understand the 
 pull request process.

 What I have done:

- used Pkg.add to obtain a local version of said package
- ran `git branch mybranch` to create a local git branch 
- created my code additions and used `git add` to include them. Ran 
`git commit -m`

 I am confused over how to continue. The instructions on git for issuing 
 a pull request require that I use their UI interface, but my local branch 
 is not going to show up when I select new pull request because it is, 
 well, local to my machine. Do I need to fork the repository first? When I 
 try creating a branch through the UI I do not get an option to create one 
 like they indicate in the tutorial 
 https://help.github.com/articles/creating-and-deleting-branches-within-your-repository/#creating-a-branch,
  
 perhaps because I am not a repo owner.

 Thank you.



Re: [julia-users] Memory mapping composite type + fixed length strings

2014-11-21 Thread Joshua Adelman
Hi Tim,

Thanks for pointing out my basic error. I can now get some test files read 
that don't contain any string components.

Josh



On Friday, November 21, 2014 3:11:32 PM UTC-5, Tim Holy wrote:

 You'll see why if you type `methods(mmap_array)`: the dims has to be 
 represented as a tuple. 

 Currently, the only way I know of to create a fixed-sized buffer as an 
 element 
 of a struct in julia is via immutables with one field per object. Here's 
 one 
 example: 

 https://github.com/JuliaGPU/CUDArt.jl/blob/1742a19b35a52ecec4ee14cfbec823f8bcb22e0f/gen/gen_libcudart_h.jl#L403-L660
  

 It has not escaped notice that this is less than ideal :-). 

 --Tim 

 On Friday, November 21, 2014 11:57:10 AM Joshua Adelman wrote: 
  I'm playing around with Julia for the first time in an attempt to see if 
 I 
  can replace a Python + Cython component of a system I'm building. 
 Basically 
  I have a file of bytes representing a numpy structured/recarray (in 
 memory 
  this is an array of structs). This gets memory mapped into a numpy array 
 as 
  (Python code): 
  
  f = open(data_file, 'r+') 
  cmap = mmap.mmap(f.fileno(), nbytes) 
  data_array = np.ndarray(size, dtype=dtype, buffer=cmap) 
  
  
  where dtype=[('x', np.int32), ('y', np.float64), ('name', 'S17')]. 
  
  In cython I would create a C packed struct and to deal with the fixed 
  length string elements, I would specify them as char[N] arrays: 
  
  cdef packed struct atype: 
  np.int32_t x 
  np.float64_t y 
  char[17] name 
  
  I'm trying to figure out how I would accomplish something similar in 
 Julia. 
  Setting aside the issue of the fixed length strings for a moment, I 
 thought 
  to initially create a composite type: 
  
  immutable AType 
  x::Int32 
  y::Float64 
  name::??? 
  end 
  
  and then if I had an file containing 20 records use: 
  
  f = open("test1.dat", "r") 
  data = mmap_array(AType, 20, f) 
  
  but I get an error: 
  
  ERROR: `mmap_array` has no method matching mmap_array(::Type{AType}, 
  
  ::Int64, ::IOStream) 
  
  Is there a way to memory map a file into an array of custom 
  records/composite types in Julia? And if there is, how should one 
 represent 
  the fixed length string fields? 
  
  Any suggestions would be much appreciated. 
  
  Josh 



Re: [julia-users] Re: Shape preserving Interpolation

2014-11-21 Thread Tomas Lycken
Hi Nils (and others),

I just completed some work I've had lying around on 
[Interpolations.jl](https://github.com/tlycken/Interpolations.jl), a 
package which is meant to eventually become `Grid.jl`s heir. The stuff I've 
done so far isn't even merged into master yet (but it hopefully will be 
quite soon), so this is really an early call, but I think there might be 
some infrastructure in this package already that can be useful for lots of 
interpolation types. Besides, it wouldn't be a bad thing to try to gather 
all of these different methods in one place.

`Interpolations.jl` currently only supports [B-splines on regular 
grids](http://en.wikipedia.org/wiki/B-spline#Cardinal_B-spline) (and only 
up to quadratic order, although cubic is in the pipeline), but I would 
definitely be interested in a collaboration effort to add e.g. Hermite 
splines of various degrees as well. I would also like to at least 
investigate how difficult it would be to generalize the approach used there 
to work on irregular grids.

There is quite a ways to feature parity with `Grid.jl`, but at least for 
B-splines most of the basic infrastructure is there, and it's all been 
designed to be easy to extend with new interpolation types. Feel free to 
comment, file issues or pull requests with any ideas or functionality you'd 
like to see.

Regards,

// Tomas

On Friday, November 14, 2014 12:57:19 PM UTC+1, Nils Gudat wrote:

 Hi Tamas,

 Thanks for your input! Indeed it appears that shape preserving 
 interpolation in higher dimensions is a somewhat tricky problem. Most of 
 the literature I've found is in applied maths journals and not a lot seems 
 to have been transferred to economics, although there's a paper by Cai 
 and Judd 
 http://books.google.co.uk/books?id=xDhO6L_Psp8Cpg=PA499lpg=PA499dq=shape+preserving+interpolation+higher+dimensionssource=blots=8yLHXvILy-sig=ykAEER_ahDcCckTBZmfcq1cMQUUhl=ensa=Xei=ktplVOjSDcPmav4Mved=0CDgQ6AEwAg#v=onepageq=shape%20preserving%20interpolation%20higher%20dimensionsf=false
  
 in the Handbook of Computational Economics, Vol. 3. 
 In any case this discussion is not about Julia anymore, but if it turns 
 out I really have to write some form of shape-preserving higher dimensional 
 interpolation algorithm I'll make sure to make it as general as possible so 
 that it can potentially be added to some Julia interpolation package.

 Best,
 Nils



Re: [julia-users] Re: Shape preserving Interpolation

2014-11-21 Thread Tomas Lycken
(And I really need to stop writing markdown and forget to trigger the 
beautification of it...)

On Friday, November 21, 2014 10:19:08 PM UTC+1, Tomas Lycken wrote:

 Hi Nils (and others),

 I just completed some work I've had lying around on [Interpolations.jl](
 https://github.com/tlycken/Interpolations.jl), a package which is meant 
 to eventually become `Grid.jl`s heir. The stuff I've done so far isn't even 
 merged into master yet (but it hopefully will be quite soon), so this is 
 really an early call, but I think there might be some infrastructure in 
 this package already that can be useful for lots of interpolation types. 
 Besides, it wouldn't be a bad thing to try to gather all of these different 
 methods in one place.

 `Interpolations.jl` currently only supports [B-splines on regular grids](
 http://en.wikipedia.org/wiki/B-spline#Cardinal_B-spline) (and only up to 
 quadratic order, although cubic is in the pipeline), but I would definitely 
 be interested in a collaboration effort to add e.g. Hermite splines of 
 various degrees as well. I would also like to at least investigate how 
 difficult it would be to generalize the approach used there to work on 
 irregular grids.

 There is quite a ways to feature parity with `Grid.jl`, but at least for 
 B-splines most of the basic infrastructure is there, and it's all been 
 designed to be easy to extend with new interpolation types. Feel free to 
 comment, file issues or pull requests with any ideas or functionality you'd 
 like to see.

 Regards,

 // Tomas

 On Friday, November 14, 2014 12:57:19 PM UTC+1, Nils Gudat wrote:

 Hi Tamas,

 Thanks for your input! Indeed it appears that shape preserving 
 interpolation in higher dimensions is a somewhat tricky problem. Most of 
 the literature I've found is in applied maths journals and not a lot seems 
 to have been transferred to economics, although there's a paper by Cai 
 and Judd 
 http://books.google.co.uk/books?id=xDhO6L_Psp8Cpg=PA499lpg=PA499dq=shape+preserving+interpolation+higher+dimensionssource=blots=8yLHXvILy-sig=ykAEER_ahDcCckTBZmfcq1cMQUUhl=ensa=Xei=ktplVOjSDcPmav4Mved=0CDgQ6AEwAg#v=onepageq=shape%20preserving%20interpolation%20higher%20dimensionsf=false
  
 in the Handbook of Computational Economics, Vol. 3. 
 In any case this discussion is not about Julia anymore, but if it turns 
 out I really have to write some form of shape-preserving higher dimensional 
 interpolation algorithm I'll make sure to make it as general as possible so 
 that it can potentially be added to some Julia interpolation package.

 Best,
 Nils



Re: [julia-users] Segfault when passing large arrays to Julia from C

2014-11-21 Thread David Smith
Thanks for your reply. I think that fixed it, but I don't understand why.

I played around with the code and used what I saw in julia.h to figure out 
that waiting until after the JL_GC_PUSH2 call to assign to the pointers 
worked. But I don't know why it worked.

(Disclaimer: I know almost nothing about gc.)
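
[Editorial aside: the working order described above matches the rooting
pattern in the embedding docs of that era. A hedged C sketch: root the slots
before the allocations, since JL_GC_PUSH2 takes the addresses of local
variables and protects whatever they point to from that moment on. Not
compilable standalone; it assumes libjulia headers and an `array_type` as in
the original program.]

```c
#include <julia.h>

void example(jl_value_t *array_type, size_t N)
{
    /* Declare the slots first and root them while still NULL... */
    jl_array_t *x = NULL;
    jl_array_t *y = NULL;
    JL_GC_PUSH2(&x, &y);

    /* ...then allocate. If allocating y triggers a collection, x is
     * already visible to the GC and cannot be freed out from under us. */
    x = jl_alloc_array_1d(array_type, N);
    y = jl_alloc_array_1d(array_type, N);

    /* ... use x and y ... */

    JL_GC_POP();
}
```

In the original program, y was allocated while x was still unrooted, so a
collection during that second allocation could move or free x; with small N
the collection simply never happened to run.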



On Friday, November 21, 2014 12:08:44 PM UTC-7, Jameson wrote:

 You are missing a gc root for x when you allocate y. I can't say if that 
 is the only issue, since you seem to be unboxing a jl_array_t* using 
 jl_unbox_voidpointer
 On Fri, Nov 21, 2014 at 1:51 PM David Smith david...@gmail.com 
 javascript: wrote:

 The following code works as posted, but when I increase N to 1000, I 
 get a segfault. I don't see why that would break anything, as the size_t 
 type should be large enough to handle that.

 Can someone point me in the right direction?

 Thanks!


 #include <julia.h>
 #include <stdio.h>

 int main()
 {
 size_t N = 100;
 jl_init(NULL);
 JL_SET_STACK_BASE;
 jl_value_t* array_type = jl_apply_array_type( jl_float64_type, 1 );
 jl_array_t* x = jl_alloc_array_1d(array_type, N);
 jl_array_t* y = jl_alloc_array_1d(array_type, N);
 JL_GC_PUSH2(&x, &y);
 double* xData = jl_array_data(x);
 double* yData = jl_array_data(y);
 for (size_t i=0; i<jl_array_len(x); i++){
 xData[i] = i;
 yData[i] = i;
 }
 jl_eval_string("myfft(x,y) = fft(complex(x,y))");
 jl_function_t *func = jl_get_function(jl_current_module, "myfft");
 jl_value_t* jlret = jl_call2(func, (jl_value_t*) x, (jl_value_t*)y);
 double *ret = jl_unbox_voidpointer(jlret);
 for(size_t i=0; i<10; i++)
 printf("(%f,%f) = %f + %fi\n", xData[i], yData[i], ret[2*i],
 ret[2*i+1]);
 JL_GC_POP();
 return 0;
 }



Re: [julia-users] calling libraries of c++ or fortran90 code in Julia

2014-11-21 Thread Ben Arthur
there might be a way to fix this, but since i'm in a hurry, i decided to 
just modify the C++ so as to avoid mangling.  thanks for the help.


[julia-users] Oddness in Graphs.jl

2014-11-21 Thread Richard Futrell
Hi all,

Is this expected behavior? It was surprising to me. On 0.4.0-dev+1745, 
pulled today, but I had noticed it previously.

julia> using Graphs

# make a graph and add an edge...

julia> g1 = graph([1, 2], Edge{Int}[])
Directed Graph (2 vertices, 0 edges)

julia> add_edge!(g1, 1, 2)
edge [1]: 1 -- 2

julia> edges(g1)
1-element Array{Edge{Int64},1}:
 edge [1]: 1 -- 2

# OK, all is well.
# But how about this graph:

julia> g2 = graph([2, 3], Edge{Int}[])
Directed Graph (2 vertices, 0 edges)

julia> add_edge!(g2, 2, 3)
ERROR: BoundsError()
 in add_edge! at /Users/canjo/.julia/v0.4/Graphs/src/graph.jl:87
 in add_edge! at /Users/canjo/.julia/v0.4/Graphs/src/graph.jl:98

# Despite giving me an error, it did in fact successfully add the edge:

julia> edges(g2)
1-element Array{Edge{Int64},1}:
 edge [1]: 2 -- 3

What's going on here?

thanks, Richard




[julia-users] Re: Oddness in Graphs.jl

2014-11-21 Thread Dahua Lin
This is something related to the vertex indexing mechanism. Please file an 
issue on Graphs.jl. We may discuss how to solve this over there.

Dahua
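A guess at the mechanism, for readers hitting the same error: per-vertex data such as adjacency lists appear to be stored in arrays addressed by the vertex value, so vertex labels that are not exactly 1:n index past the end of those arrays. A self-contained sketch of that failure mode (an assumption about the internals, not Graphs.jl code):

```julia
# Two vertices labeled 2 and 3, but only two storage slots (indices 1, 2).
vertices = [2, 3]
adjlist = [Int[] for _ in vertices]

# add_edge!(g2, 2, 3) effectively needs slot 3:
# push!(adjlist[3], 2)      # -> BoundsError, as in the report above
length(adjlist)             # 2
```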

On Saturday, November 22, 2014 6:39:47 AM UTC+8, Richard Futrell wrote:





Re: [julia-users] Typeclass implementation

2014-11-21 Thread Jason Morton
Also check out
https://github.com/jasonmorton/Typeclass.jl
Available from Pkg, which tries to do this.

Re: [julia-users] Broadcasting variables

2014-11-21 Thread Madeleine Udell
My experiments with parallelism also occur in focused blocks; I think
that's a sign that it's not yet as user friendly as it could be.

Here's a solution to the problem I posed that's simple to use: @parallel +
global can be used to broadcast a variable, while @everywhere can be used
to do a computation on local data (ie, without resending the data). I'm not
sure how to do the variable renaming programmatically, though.

# initialize variables
m,n = 10,20
localX = Base.shmem_rand(m)
localY = Base.shmem_rand(n)
localf = [x->i+sum(x) for i=1:m]
localg = [x->i+sum(x) for i=1:n]

# broadcast variables to all worker processes
@parallel for i=workers()
global X = localX
global Y = localY
global f = localf
global g = localg
end
# give variables same name on master
X,Y,f,g = localX,localY,localf,localg

# compute
for iteration=1:10
@everywhere for i=localindexes(X)
X[i] = f[i](Y)
end
@everywhere for j=localindexes(Y)
Y[j] = g[j](X)
end
end

On Fri, Nov 21, 2014 at 11:14 AM, Tim Holy tim.h...@gmail.com wrote:

 My experiments with parallelism tend to occur in focused blocks, and I
 haven't
 done it in quite a while. So I doubt I can help much. But in general I
 suspect
 you're encountering these problems because much of the IPC goes through
 thunks, and so a lot of stuff gets reclaimed when execution is done.

 If I were experimenting, I'd start by trying to create RemoteRef()s and
 put!
 ()ing my variables into them. Then perhaps you might be able to fetch them
 from other processes. Not sure that will work, but it seems to be worth a
 try.

 HTH,
 --Tim

 On Thursday, November 20, 2014 08:20:19 PM Madeleine Udell wrote:
  I'm trying to use parallelism in julia for a task with a structure that I
  think is quite pervasive. It looks like this:
 
  # broadcast lists of functions f and g to all processes so they're
  available everywhere
  # create shared arrays X,Y on all processes so they're available
 everywhere
  for iteration=1:1000
  @parallel for i=1:size(X)
  X[i] = f[i](Y)
  end
  @parallel for j=1:size(Y)
  Y[j] = g[j](X)
  end
  end
 
  I'm having trouble making this work, and I'm not sure where to dig around
  to find a solution. Here are the difficulties I've encountered:
 
  * @parallel doesn't allow me to create persistent variables on each
  process; ie, the following results in an error.
 
  s = Base.shmem_rand(12,3)
  @parallel for i=1:nprocs() m,n = size(s) end
  @parallel for i=1:nprocs() println(m) end
 
  * @everywhere does allow me to create persistent variables on each
 process,
  but doesn't send any data at all, including the variables I need in order
  to define new variables. Eg the following is an error: s is a shared
 array,
  but the variable (ie pointer to) s is apparently not shared.
  s = Base.shmem_rand(12,3)
  @everywhere m,n = size(s)
 
  Here are the kinds of questions I'd like to see protocode for:
  * How can I broadcast a variable so that it is available and persistent
 on
  every process?
  * How can I create a reference to the same shared array s that is
  accessible from every process?
  * How can I send a command to be performed in parallel, specifying which
  variables should be sent to the relevant processes and which should be
  looked up in the local namespace?
 
  Note that everything I ask above is not specific to shared arrays; the
 same
  constructs would also be extremely useful in the distributed case.
 
  --
 
  An interesting partial solution is the following:
  funcs! = Function[x->x[:] = x+k for k=1:3]
  d = drand(3,12)
  let funcs! = funcs!
@sync @parallel for k in 1:3
  funcs![myid()-1](localpart(d))
end
  end
 
  Here, I'm not sure why the let statement is necessary to send funcs!,
 since
  d is sent automatically.
 
  -
 
  Thanks!
  Madeleine




-- 
Madeleine Udell
PhD Candidate in Computational and Mathematical Engineering
Stanford University
www.stanford.edu/~udell
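Tim's RemoteRef suggestion above can be sketched as follows (in later Julia versions RemoteRef became RemoteChannel in the Distributed stdlib; the idea is unchanged):

```julia
using Distributed

# Put the data into a remote reference once; any process holding `r`
# can then fetch it on demand instead of having it re-sent each time.
r = RemoteChannel(() -> Channel{Vector{Float64}}(1))
put!(r, rand(3))

# On a worker this would be remotecall_fetch(fetch, pid, r); with no
# workers added, pid 1 (the master) stands in.
v = remotecall_fetch(fetch, 1, r)
```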


[julia-users] Saving the results of a loop

2014-11-21 Thread Steven G. Johnson
Use print, not write. The write function outputs the raw binary representation 
to the file, while print outputs text. 
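A corrected sketch of the loop from the question below, with println in place of write so the numbers come out as text rather than raw bytes (file name kept from the original):

```julia
open("y_save.csv", "w") do csvfile
    println(csvfile, "y")        # header row, written as text
    for i = 1:20
        println(csvfile, i^2)    # one squared value per line, as text
    end
end
```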

Re: [julia-users] Saving the results of a loop

2014-11-21 Thread Isaiah Norton
[Please don't cross-post on julia-users and StackOverflow within such a
short interval -- most of the same people watch both. You will usually get
a fairly quick response on one or the other]

On Fri, Nov 21, 2014 at 7:24 PM, Pileas phoebus.apollo...@gmail.com wrote:

 Hello all,

 I have been trying to learn Julia for I believe that it is a good
 alternative compared to the others that are in the market  nowadays.
 However, since Julia is a new language---and because there is not even one
 book out there to help the novice---, I start this thread here, although my
 question may be very basic.

 So here it is: Assume that I write the following code and I want to save
 my results in a csv file.

 # -- CODE
 ---

 csvfile = open("y_save.csv", "w")
 write(csvfile, "y", "\n")

 # Set up the function
 foo(i) = i^2

 # Start the for loop

 for i = 1:20
 y = foo(i)
 y_save = y
 write(csvfile, y_save, ",", "\n")
 end

 #   END
 --

 The code that I wrote before is from this site:
 http://comments.gmane.org/gmane.comp.lang.julia.user/21070

 Although I was able to make that work, I do not understand what I am doing
 wrong and the result I get in the csv is not readable.

 I hope someone can provide some assistance with it.
 Thanks in advance!



[julia-users] Macros with multiple expressions - attempting to inject variable as each's first argument

2014-11-21 Thread Tim McNamara
I am attempting to write my first macro. Cairo.jl, following its C roots, 
requires a context argument to be added at the start of many of its drawing 
functions. This results in lots of typing, which I think could be cleaned 
up with a macro.

Here's a snippet from one of the examples[1].

using Cairo
c = CairoRGBSurface(256,256);
cr = CairoContext(c);
...
new_path(cr); # current path is not consumed by cairo_clip()
rectangle(cr, 0, 0, 256, 256);
fill(cr);
set_source_rgb(cr, 0, 1, 0);
move_to(cr, 0, 0);
line_to(cr, 256, 256);
move_to(cr, 256, 0);
line_to(cr, 0, 256);
set_line_width(cr, 10.0);
stroke(cr);

What I would like to be able to do is use a macro to insert a common 
argument into each expression. The result might look something like...

using Cairo
c = CairoRGBSurface(256,256)
cr = CairoContext(c)

@withcontext cr, begin
  ...
  new_path()
  rectangle(0, 0, 256, 256)
  fill()
  set_source_rgb(0, 1, 0)
  move_to(0, 0)
  line_to(256, 256)
  move_to(256, 0)
  line_to(0, 256)
  set_line_width(10.0)
  stroke()
end

I have clumsily got this far.. 

macro withcontext(exprs...)
  common_arg = exprs.args[1]
  
  for i in 2:length(exprs)
expr = exprs[i]
insert!(expr, 2, common_arg)
  end
  return exprs[2:end]
end

..but haven't seen much success. Both lines of this tiny experiment are 
being wrapped up as a single expression.

julia> a = [1,2,3,4,5]
julia> @withcontext begin
  a 
  methods()
end
ERROR: type: getfield: expected DataType, got (Expr,)

I'm quite new to Julia and newer to macros - is this kind of thing even 
possible? Any suggestions?

With thanks,



Tim McNamara
http://timmcnamara.co.nz

[1] https://github.com/JuliaLang/Cairo.jl/blob/master/samples/sample_clip.jl


[julia-users] Re: Macros with multiple expressions - attempting to inject variable as each's first argument

2014-11-21 Thread Tony Fong
https://github.com/one-more-minute/Lazy.jl is your friend. In particular 
see `@>`.
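For the specific injection Tim describes, a minimal hand-rolled sketch (an illustration of the idea, not Lazy.jl's macro) could look like:

```julia
# Rewrite every call in a begin...end block so it takes `ctx` as its
# first argument: f(a) becomes f(ctx, a).
macro withcontext(ctx, block)
    for ex in block.args
        if isa(ex, Expr) && ex.head == :call
            insert!(ex.args, 2, ctx)
        end
    end
    esc(block)
end

# Hypothetical two-argument function standing in for Cairo's API:
shift(c, x) = c + x

y = @withcontext 5 begin
    shift(1)
    shift(2)
end
# y == 7  (the block's value is its last call, shift(5, 2))
```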

On Saturday, November 22, 2014 9:14:57 AM UTC+7, Tim McNamara wrote:
