Re: [julia-users] In what version is Julia supposed to mature?

2015-07-30 Thread Tom Breloff
Then what kind of tribal language contest should it be??

On Thu, Jul 30, 2015 at 9:38 AM, Job van der Zwan j.l.vanderz...@gmail.com
wrote:

 Let's not turn this into a tribal language pissing contest, please.


 On Thursday, 30 July 2015 15:15:42 UTC+2, Tony Kelman wrote:

 Hah. Go's definition of "systems" is totally invalid everywhere in the
 world except inside Google.

 We also have nicer syntax macros than either of those languages. Compat
 might start getting pretty ungainly over time, but we can use REQUIRE to
 deal with that if the version range ever gets too intractable to support
 everything within the same set of macros.
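
A minimal sketch (not from the thread) of the two mechanisms Tony names, as
they were used in the 0.3/0.4 era: the Compat package's @compat macro rewrites
new syntax for old releases, and a package's REQUIRE file bounds the supported
version range. The version numbers below are illustrative.

using Compat

# 0.4-style Dict construction, rewritten by @compat so it also runs on 0.3:
d = @compat Dict(:release => v"0.3.10", :nightly => v"0.4.0-dev")

# REQUIRE is a plain-text file, not Julia code; a line like this
# (hypothetical contents) restricts the package to Julia 0.3 and later:
#   julia 0.3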


 On Thursday, July 30, 2015 at 6:08:54 AM UTC-7, Job van der Zwan wrote:

 On Wednesday, 29 July 2015 19:00:38 UTC+2, Tony Kelman wrote:

 I guess the waters are a little muddied here lately with Rust having
 recently put such a big emphasis on stability and reaching 1.0, actively
 telling people not to use the language prior to that point, and seemingly
 having really high expectations about how long 1.x will last for. They have
 a much smaller standard library than we do, but I would think trimming ours
 down to the bare minimum would be necessary before calling the language
 1.0. Maybe that could just as well be a 2.0 or 3.0 target instead.


 Go did the same before. I think it's because both position themselves as
 "systems" languages (with slightly different - but both valid - definitions 
 of "systems"). I don't think the need for stability is quite as important 
 for Julia - library maintainers still care of course, but there's not as
 much infrastructure built on top of Julia that depends on guaranteed
 stability.




[julia-users] Re: Irregular Interpolation

2015-07-30 Thread Jude
That's really helpful! I will have a look at all the packages you guys 
referenced and see which one suits my problem best.

Thanks a lot!

On Wednesday, July 29, 2015 at 8:26:35 PM UTC+1, Luke Stagner wrote:

 I wrote some code that does polyharmonic (thin-plate) splines; feel free to 
 use it: https://github.com/tlycken/Interpolations.jl/issues/6
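
A bare-bones sketch (mine, not Luke's linked code) of polyharmonic-spline
interpolation on scattered 3-D points; the helper names are made up, and a
real implementation would also add the usual low-order polynomial term for
robustness.

phi(r) = r^3    # polyharmonic kernel of order 3

# Solve for weights w so that s(x) = sum_i w[i]*phi(norm(x - c_i))
# reproduces the sampled values at the scattered centers (one per column).
function polyharmonic_weights(centers::Matrix{Float64}, values::Vector{Float64})
    n = size(centers, 2)
    A = [phi(norm(centers[:,i] - centers[:,j])) for i in 1:n, j in 1:n]
    A \ values
end

interp(x, centers, w) = sum([w[i] * phi(norm(x - centers[:,i])) for i in 1:length(w)])

centers = rand(3, 50)              # 50 scattered points in the unit cube
values = vec(sum(centers.^2, 1))   # sample f(x) = ||x||^2 at those points
w = polyharmonic_weights(centers, values)
interp([0.5, 0.5, 0.5], centers, w)   # close to 0.75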

 On Wednesday, July 29, 2015 at 6:16:59 AM UTC-7, Jude wrote:

 Hi,

 I have been using the fantastic grid package by Tim Holy for the past 
 while, but I really need to allow for non-equally spaced grids. It is 
 important to have a bunch of points at some parts of the grid but not at 
 others. I was wondering if anyone knows of any package that allows for 
 irregular interpolation. I know it is possible to do this using Tim's 
 package for one dimension, but I want to interpolate in 3 dimensions. Have 
 any new packages been developed lately, or does anyone know a fast way 
 to do this?

 Thank you



Re: [julia-users] In what version is Julia supposed to mature?

2015-07-30 Thread Job van der Zwan
On Wednesday, 29 July 2015 19:00:38 UTC+2, Tony Kelman wrote:

 I guess the waters are a little muddied here lately with Rust having 
 recently put such a big emphasis on stability and reaching 1.0, actively 
 telling people not to use the language prior to that point, and seemingly 
 having really high expectations about how long 1.x will last for. They have 
 a much smaller standard library than we do, but I would think trimming ours 
 down to the bare minimum would be necessary before calling the language 
 1.0. Maybe that could just as well be a 2.0 or 3.0 target instead. 


Go did the same before. I think it's because both position themselves as 
 "systems" languages (with slightly different - but both valid - definitions 
 of "systems"). I don't think the need for stability is quite as important 
for Julia - library maintainers still care of course, but there's not as 
much infrastructure built on top of Julia that depends on guaranteed 
stability.


Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Steven G. Johnson
On Wednesday, July 29, 2015 at 5:47:50 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 00:00:56 UTC+3, Steven G. Johnson wrote:

 Job, I'm basing my judgement on the presentation.


 Ah ok, I was wondering. I feel like those presentations give a general 
 impression, but don't really explain the details enough. And like I said, 
 your critique overlaps with Gustafson's own critique of traditional 
 interval arithmetic, so I wasn't sure if you meant that you don't buy his 
 suggested alternative ubox method after reading the book, or indicated 
 scepticism based on earlier experience, but without full knowledge of what 
 his suggested alternative is.


From the presentation, it seemed pretty explicit that the ubox method 
replaces a single interval or pair of intervals with a rapidly expanding 
set of boxes.  I just don't see any conceivable way that this could be 
practical for large-scale problems involving many variables.
 

 Well.. we give up one bit of *precision* in the fraction, but *our set of 
 representations is still the same size*. We still have the same number of 
 floats as before! It's just that half of them are now exact (with one bit 
 less precision), and the other half represents open intervals between these 
 exact numbers. Which lets you represent the entire real number line 
 accurately (but with limited precision, unless they happen to be equal to 
 an exact float). 


Sorry, but that just does not and cannot work.

The problem is that if you interpret an exact unum as the open interval 
between two adjacent exact values, what you have is essentially the same as 
interval arithmetic.  The result of each operation will produce intervals 
that are broader and broader (necessitating lower and lower precision 
unums), with the well known problem that the interval quickly becomes 
absurdly pessimistic in real problems (i.e. you quickly and prematurely 
discard all of your precision in a variable-precision format like unums).

The real problem with interval arithmetic is not open vs. closed intervals, 
it is this growth of the error bounds in realistic computations (due to the 
dependency problem and similar).  (The focus on infinite and semi-infinite 
open intervals is a sideshow.  If you want useful error bounds, the 
important things are the *small* intervals.)

If you discard the interval interpretation with its rapid loss of 
precision, what you are left with is an inexact flag per value, but with no 
useful error bounds.   And I don't believe that this is much more useful 
than a single inexact flag for a set of computations as in IEEE.
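
For concreteness, a toy illustration (mine, not Steven's) of the dependency
problem he describes; outward rounding is omitted for brevity.

import Base: +, -

immutable Interval
    lo::Float64
    hi::Float64
end

+(a::Interval, b::Interval) = Interval(a.lo + b.lo, a.hi + b.hi)
-(a::Interval, b::Interval) = Interval(a.lo - b.hi, a.hi - b.lo)

x = Interval(0.9, 1.1)
x - x   # roughly Interval(-0.2, 0.2), not 0: the two operands are treated
        # as independent, so every operation widens the bound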



Re: [julia-users] In what version is Julia supposed to mature?

2015-07-30 Thread Job van der Zwan
Let's not turn this into a tribal language pissing contest, please.

On Thursday, 30 July 2015 15:15:42 UTC+2, Tony Kelman wrote:

 Hah. Go's definition of "systems" is totally invalid everywhere in the 
 world except inside Google.

 We also have nicer syntax macros than either of those languages. Compat 
 might start getting pretty ungainly over time, but we can use REQUIRE to 
 deal with that if the version range ever gets too intractable to support 
 everything within the same set of macros.


 On Thursday, July 30, 2015 at 6:08:54 AM UTC-7, Job van der Zwan wrote:

 On Wednesday, 29 July 2015 19:00:38 UTC+2, Tony Kelman wrote:

 I guess the waters are a little muddied here lately with Rust having 
 recently put such a big emphasis on stability and reaching 1.0, actively 
 telling people not to use the language prior to that point, and seemingly 
 having really high expectations about how long 1.x will last for. They have 
 a much smaller standard library than we do, but I would think trimming ours 
 down to the bare minimum would be necessary before calling the language 
 1.0. Maybe that could just as well be a 2.0 or 3.0 target instead. 


 Go did the same before. I think it's because both position themselves as 
 "systems" languages (with slightly different - but both valid - definitions 
 of "systems"). I don't think the need for stability is quite as important 
 for Julia - library maintainers still care of course, but there's not as 
 much infrastructure built on top of Julia that depends on guaranteed 
 stability.



Re: [julia-users] Re: John L. Gustafson's UNUMs

2015-07-30 Thread Simon Byrne
My comment was only relating to ordinary floating point, I still don't 
really understand unums.

On Thursday, 30 July 2015 14:47:20 UTC+1, Tom Breloff wrote:

 Simon: if I understand what you're suggesting, you'd like to add a 
 rounding direction flag whenever the ubit is set that would indicate 
 which direction you *would* round if you wanted to?  I like this idea, as 
 it allows you to throw away the implicit open interval in favor of a 
 rounded exact value (if that's what you want).  You potentially get the 
 best of both worlds, but with the speed/memory penalty of setting that 
 extra bit?  I can't really comment yet on how much processing this would 
 add...

 On Thu, Jul 30, 2015 at 9:18 AM, Simon Byrne simon...@gmail.com wrote:

 On Wednesday, 29 July 2015 22:07:45 UTC+1, Steven G. Johnson wrote:

 And I don't see a clear practical use-case for an inexact bit per value, 
 as opposed to a single inexact flag for a whole set of computations (as in 
 IEEE).


 Probably not quite what others had in mind, but an instruction-specific 
 inexact flag (and rounding mode) would make it possible to implement 
 round-to-odd fairly neatly (e.g., see here 
 http://www.exploringbinary.com/gcc-avoids-double-rounding-errors-with-round-to-odd/),
  
 which would in turn allow implementing all the formatOf operations in the 
 IEEE 754-2008 standard.




[julia-users] Re: [julia-news] ANN: Testing specific Julia versions on Travis CI

2015-07-30 Thread Stefan Karpinski
Ah, this is great! Thank you, Tony, Pontus and Elliot!!

On Thu, Jul 30, 2015 at 8:20 AM, Tony Kelman t...@kelman.net wrote:

 Hey folks, an announcement for package authors and users who care about
 testing:

 We've had support for Julia package testing on Travis CI
 http://travis-ci.org for almost 9 months now, ref
 https://groups.google.com/forum/#!msg/julia-users/BtCxh4k9hZA/ngUvxdxOxQ8J
 if you missed the original announcement. Up to this point we supported the
 following settings for which Julia version to test against:

 language: julia
 julia:
 - release
 - nightly

 Release has meant the latest release version in the 0.3.x series, and
 nightly has meant the latest nightly build of 0.4-dev master. Once Julia
 0.4.0 gets released, the meaning of these settings will change, where
 release will be the latest version in the 0.4.x series, and nightly will be
 the latest nightly build of 0.5-dev master. Considering the wide install
 base and number of packages that may want to continue supporting 0.3 even
 after 0.4.0 gets released, we've just added support for additional version
 options in your .travis.yml file. You can now do

 julia:
 - release
 - nightly
 - 0.3

 Or, if you want to test with specific point releases, you can do that too
 (there should not usually be much need for this, but it could be useful
 once in a while to compare different point releases):

 julia:
 - release
 - nightly
 - 0.3
 - 0.3.10

 The oldest point release for which we have generic Linux binaries
 available is 0.3.1. If you enable multi-os support for your repository (see
 http://docs.travis-ci.com/user/multi-os/), then you can go back as far as
 0.2.0 on OS X. Note that you'd need to replace the default test script with
 the old-fashioned `julia test/runtests.jl` since `Pkg.test` and
 `--check-bounds=yes` are not supported on Julia version 0.2.x. The
 downloads of those versions would fail on Linux workers so you may need to
 set up a build matrix with excluded jobs (see
 http://docs.travis-ci.com/user/customizing-the-build/#Build-Matrix).

 Let us know if you have any questions or issues.

 Happy testing,
 Tony (with thanks to @ninjin and @staticfloat for PR review)




[julia-users] Re: ANN: Testing specific Julia versions on Travis CI

2015-07-30 Thread Tony Kelman
Oh, and just so people are clear, you don't need to touch the default test 
script at all unless you want to be adventurous and try testing against 
0.2.x on OS X. For 0.3, 0.3.x, and future versions the default test script 
should be fine.


On Thursday, July 30, 2015 at 5:20:06 AM UTC-7, Tony Kelman wrote:

 Hey folks, an announcement for package authors and users who care about 
 testing:

 We've had support for Julia package testing on Travis CI 
 http://travis-ci.org for almost 9 months now, ref 
 https://groups.google.com/forum/#!msg/julia-users/BtCxh4k9hZA/ngUvxdxOxQ8J 
 if you missed the original announcement. Up to this point we supported the 
 following settings for which Julia version to test against:

 language: julia
 julia:
 - release
 - nightly

 Release has meant the latest release version in the 0.3.x series, and 
 nightly has meant the latest nightly build of 0.4-dev master. Once Julia 
 0.4.0 gets released, the meaning of these settings will change, where 
 release will be the latest version in the 0.4.x series, and nightly will be 
 the latest nightly build of 0.5-dev master. Considering the wide install 
 base and number of packages that may want to continue supporting 0.3 even 
 after 0.4.0 gets released, we've just added support for additional version 
 options in your .travis.yml file. You can now do

 julia: 
 - release
 - nightly
 - 0.3

 Or, if you want to test with specific point releases, you can do that too 
 (there should not usually be much need for this, but it could be useful 
 once in a while to compare different point releases):

 julia: 
 - release
 - nightly
 - 0.3
 - 0.3.10

 The oldest point release for which we have generic Linux binaries 
 available is 0.3.1. If you enable multi-os support for your repository (see 
 http://docs.travis-ci.com/user/multi-os/), then you can go back as far as 
 0.2.0 on OS X. Note that you'd need to replace the default test script with 
 the old-fashioned `julia test/runtests.jl` since `Pkg.test` and 
 `--check-bounds=yes` are not supported on Julia version 0.2.x. The 
 downloads of those versions would fail on Linux workers so you may need to 
 set up a build matrix with excluded jobs (see 
 http://docs.travis-ci.com/user/customizing-the-build/#Build-Matrix).

 Let us know if you have any questions or issues.

 Happy testing,
 Tony (with thanks to @ninjin and @staticfloat for PR review)



[julia-users] Re: John L. Gustafson's UNUMs

2015-07-30 Thread Simon Byrne
On Wednesday, 29 July 2015 22:07:45 UTC+1, Steven G. Johnson wrote:

 And I don't see a clear practical use-case for an inexact bit per value, 
 as opposed to a single inexact flag for a whole set of computations (as in 
 IEEE).


Probably not quite what others had in mind, but an instruction-specific 
inexact flag (and rounding mode) would make it possible to implement 
round-to-odd fairly neatly (e.g., see here 
http://www.exploringbinary.com/gcc-avoids-double-rounding-errors-with-round-to-odd/),
 
which would in turn allow implementing all the formatOf operations in the 
IEEE 754-2008 standard.
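
A small self-contained sketch (mine; the linked article has the full
treatment) of round-to-odd when narrowing Float64 to Float32 in software: if
the conversion is inexact, return the adjacent Float32 whose last significand
bit is odd, which is what makes a later second rounding safe.

function round_to_odd(x::Float64)
    y = Float32(x)                     # round to nearest first
    if Float64(y) != x                 # conversion was inexact
        if abs(Float64(y)) > abs(x)    # overshot away from zero: truncate
            y = y > 0 ? prevfloat(y) : nextfloat(y)
        end
        # force the last significand bit to 1 (the "odd" neighbor)
        y = reinterpret(Float32, reinterpret(UInt32, y) | one(UInt32))
    end
    y
end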


Re: [julia-users] Re: John L. Gustafson's UNUMs

2015-07-30 Thread Tom Breloff
Simon: if I understand what you're suggesting, you'd like to add a
rounding direction flag whenever the ubit is set that would indicate
which direction you *would* round if you wanted to?  I like this idea, as
it allows you to throw away the implicit open interval in favor of a
rounded exact value (if that's what you want).  You potentially get the
best of both worlds, but with the speed/memory penalty of setting that
extra bit?  I can't really comment yet on how much processing this would
add...

On Thu, Jul 30, 2015 at 9:18 AM, Simon Byrne simonby...@gmail.com wrote:

 On Wednesday, 29 July 2015 22:07:45 UTC+1, Steven G. Johnson wrote:

 And I don't see a clear practical use-case for an inexact bit per value,
 as opposed to a single inexact flag for a whole set of computations (as in
 IEEE).


 Probably not quite what others had in mind, but an instruction-specific
 inexact flag (and rounding mode) would make it possible to implement
 round-to-odd fairly neatly (e.g., see here
 http://www.exploringbinary.com/gcc-avoids-double-rounding-errors-with-round-to-odd/),
 which would in turn allow implementing all the formatOf operations in the
 IEEE 754-2008 standard.



[julia-users] ANN: Testing specific Julia versions on Travis CI

2015-07-30 Thread Tony Kelman
Hey folks, an announcement for package authors and users who care about 
testing:

We've had support for Julia package testing on Travis CI 
http://travis-ci.org for almost 9 months now, 
ref https://groups.google.com/forum/#!msg/julia-users/BtCxh4k9hZA/ngUvxdxOxQ8J 
if you missed the original announcement. Up to this point we supported the 
following settings for which Julia version to test against:

language: julia
julia:
- release
- nightly

Release has meant the latest release version in the 0.3.x series, and 
nightly has meant the latest nightly build of 0.4-dev master. Once Julia 
0.4.0 gets released, the meaning of these settings will change, where 
release will be the latest version in the 0.4.x series, and nightly will be 
the latest nightly build of 0.5-dev master. Considering the wide install 
base and number of packages that may want to continue supporting 0.3 even 
after 0.4.0 gets released, we've just added support for additional version 
options in your .travis.yml file. You can now do

julia: 
- release
- nightly
- 0.3

Or, if you want to test with specific point releases, you can do that too 
(there should not usually be much need for this, but it could be useful 
once in a while to compare different point releases):

julia: 
- release
- nightly
- 0.3
- 0.3.10

The oldest point release for which we have generic Linux binaries available 
is 0.3.1. If you enable multi-os support for your repository 
(see http://docs.travis-ci.com/user/multi-os/), then you can go back as far 
as 0.2.0 on OS X. Note that you'd need to replace the default test script 
with the old-fashioned `julia test/runtests.jl` since `Pkg.test` and 
`--check-bounds=yes` are not supported on Julia version 0.2.x. The 
downloads of those versions would fail on Linux workers so you may need to 
set up a build matrix with excluded jobs 
(see http://docs.travis-ci.com/user/customizing-the-build/#Build-Matrix).
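
Putting those two paragraphs together, an untested sketch of what such a
.travis.yml might look like (syntax per the two linked docs pages):

language: julia
os:
  - linux
  - osx
julia:
  - release
  - 0.2.0
matrix:
  exclude:
    - os: linux
      julia: 0.2.0           # no generic Linux binaries before 0.3.1
script:
  - julia test/runtests.jl   # Pkg.test and --check-bounds=yes need >= 0.3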

Let us know if you have any questions or issues.

Happy testing,
Tony (with thanks to @ninjin and @staticfloat for PR review)



[julia-users] Re: from my ip, julia-dev has been stuck all day, displays Loading... overlaid lower right

2015-07-30 Thread Jeffrey Sarnoff
fixed, thank you

On Thursday, July 23, 2015 at 6:45:11 PM UTC-4, Jeffrey Sarnoff wrote:

 The Fix (recommended to us by a reliable Google Groups contributor):
 
 There is a bug that is causing this; they can disable the tags till they 
 get the tags bug fixed.


 On Thursday, July 23, 2015 at 6:13:42 PM UTC-4, Jeffrey Sarnoff wrote:

 Regards

 On Thursday, July 23, 2015 at 6:10:11 PM UTC-4, Avik Sengupta wrote:

 Ah, well, we seem to be having issues with case sensitivity in many 
 different places!

 There are "Julia" and "julia" as tags, but I find it hard to believe 
 that this is new. (This has stopped working for me since 21 July.) An admin 
 probably needs to prune the tags. 

 Regards
 -
 Avik

 On Thursday, 23 July 2015 23:02:58 UTC+1, Jeffrey Sarnoff wrote:

 I posted a request for help with Google, and they responded:

 Apparently some action on julia-dev introduced a duplicate tag: a 
 second tag that differs from the original only in capitalization. 
 Removing all occurrences of the duplicate tag should resolve this 
 problem.  [here are the links:] 
 my post with response 
 https://productforums.google.com/forum/#!topic/apps/CeCURqUKQ-8;context-place=topicsearchin/apps/category$3Agoogle-groups-for-business%7Csort:relevance%7Cspell:false
 the posts that the response references 
 https://productforums.google.com/forum/#!msg/apps/4pfJ6fdnAwM/RPTkQWSCgkkJ



 On Thursday, July 23, 2015 at 5:19:08 PM UTC-4, Avik Sengupta wrote:

 Same here. 

 On Thursday, 23 July 2015 20:45:30 UTC+1, Jeffrey Sarnoff wrote:

 ?



[julia-users] Re: Recommended way of implementing getters and setters

2015-07-30 Thread j verzani
That's maybe not the best package to look at for examples. It was written 
quite awhile ago (and doesn't get any use as far as I can tell). The 
`get_value` is generic in the sense that the main property for 
different widgets might have different property names and this function 
would just look it up based on the type of the widget.

As for your original question, the use of indexing by symbols is inherited 
from PyCall, and is only there because the dot isn't available. I don't 
think it makes a good interface. It is awkward to type for starters. The 
advice to use Julia's generic concepts, as possible, is a good one. An 
example there would be the interface for Gtk through Gtk.jl.
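
A small sketch of the generic-function style being recommended; the
TimeSignal type and its fields are hypothetical, echoing names used later in
this thread.

immutable TimeSignal
    samples::Vector{Float64}
    fs::Float64        # sampling rate in Hz (hypothetical field)
end

# plain generic functions dispatching on the wrapper type,
# instead of symbol indexing inherited from PyCall:
n_samples(s::TimeSignal) = length(s.samples)
time_vector(s::TimeSignal) = collect(0:n_samples(s)-1) / s.fs

sig = TimeSignal(rand(100), 1000.0)
n_samples(sig)            # 100
time_vector(sig)[end]     # 0.099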

On Thursday, July 30, 2015 at 10:50:02 AM UTC-4, Adriano Vilela Barbosa 
wrote:

 Thanks for your answer.

 Before posting my original question, I took a look at some packages listed 
 in

 http://pkg.julialang.org/

 just to see what people were doing. I didn't notice much of a pattern, and 
 that's why I decided to ask here. 

 For example, the package PySide (https://github.com/jverzani/PySide.jl) 
 offers both interfaces, so that it's possible to do either

 w = Qt.QWidget()  # constructors
 w[:setWindowTitle]("Hello world example") # w.setWindowTitle() is 
 w[:setWindowTitle] in PyCall

 or

 w = Widget()
 setWindowTitle(w, "Hello world example (redux)") # methodName(object, 
 args...)

 At the end of that page, the author talks about generic methods such as 
 get_value() and set_value(), which makes me wonder if he's thinking of 
 things like

 get_value(obj,property_name)

 and

 set_value(obj,property_name,property_value)

 Maybe in the end it's just a matter of personal preference, at least for 
 now.

 Thanks a lot.

 Adriano



[julia-users] Re: Recommended way of implementing getters and setters

2015-07-30 Thread Adriano Vilela Barbosa
Thanks for your answer.

Before posting my original question, I took a look at some packages listed 
in

http://pkg.julialang.org/

just to see what people were doing. I didn't notice much of a pattern, and 
that's why I decided to ask here. 

For example, the package PySide (https://github.com/jverzani/PySide.jl) 
offers both interfaces, so that it's possible to do either

w = Qt.QWidget()  # constructors
w[:setWindowTitle]("Hello world example") # w.setWindowTitle() is 
w[:setWindowTitle] in PyCall

or

w = Widget()
setWindowTitle(w, "Hello world example (redux)") # methodName(object, 
args...)

At the end of that page, the author talks about generic methods such as 
get_value() and set_value(), which makes me wonder if he's thinking of 
things like

get_value(obj,property_name)

and

set_value(obj,property_name,property_value)

Maybe in the end it's just a matter of personal preference, at least for 
now.

Thanks a lot.

Adriano



Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Stefan Karpinski
This doesn't seem any better than "try the computation with Float128s".

On Thu, Jul 30, 2015 at 10:27 AM, Tom Breloff t...@breloff.com wrote:

 Steven: There is a section in the book dedicated to writing dynamically
 scaling precision/accuracy into your algorithms.  The idea is this:

 - Pick a small format unum at the start of your algorithm.
 - During the algorithm, check your unums for insufficient
 precision/accuracy in the final interval.
 - As soon as you discover the intervals getting too large, restart with a
 new unum environment.

 Obviously this type of resetting shouldn't be default behavior, but the
 point is that you have as much flexibility as you need to precisely define
 the level of detail that you care about, and there is sufficient
 information in your result that you can re-run with better settings if the
 result is unsatisfactory.

 The problem with floats is that you can get the result of a black-box
 calculation and have NO IDEA how wrong you are... only that your solution
 is not exact.  This concept should make you skeptical of every float
 calculation that results with the inexact flag being set.


 On Thu, Jul 30, 2015 at 10:07 AM, Steven G. Johnson stevenj@gmail.com
  wrote:

 On Wednesday, July 29, 2015 at 5:47:50 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 00:00:56 UTC+3, Steven G. Johnson wrote:

 Job, I'm basing my judgement on the presentation.


 Ah ok, I was wondering. I feel like those presentations give a general
 impression, but don't really explain the details enough. And like I said,
 your critique overlaps with Gustafson's own critique of traditional
 interval arithmetic, so I wasn't sure if you meant that you don't buy his
 suggested alternative ubox method after reading the book, or indicated
 scepticism based on earlier experience, but without full knowledge of what
 his suggested alternative is.


 From the presentation, it seemed pretty explicit that the ubox method
 replaces a single interval or pair of intervals with a rapidly expanding
 set of boxes.  I just don't see any conceivable way that this could be
 practical for large-scale problems involving many variables.


 Well.. we give up one bit of *precision* in the fraction, but *our set
 of representations is still the same size*. We still have the same
 number of floats as before! It's just that half of them are now exact (with
 one bit less precision), and the other half represents open intervals
 between these exact numbers. Which lets you represent the entire real
 number line accurately (but with limited precision, unless they happen to
 be equal to an exact float).


 Sorry, but that just does not and cannot work.

 The problem is that if you interpret an exact unum as the open interval
 between two adjacent exact values, what you have is essentially the same as
 interval arithmetic.  The result of each operation will produce intervals
 that are broader and broader (necessitating lower and lower precision
 unums), with the well known problem that the interval quickly becomes
 absurdly pessimistic in real problems (i.e. you quickly and prematurely
 discard all of your precision in a variable-precision format like unums).

 The real problem with interval arithmetic is not open vs. closed
 intervals, it is this growth of the error bounds in realistic computations
 (due to the dependency problem and similar).  (The focus on infinite and
 semi-infinite open intervals is a sideshow.  If you want useful error
 bounds, the important things are the *small* intervals.)

 If you discard the interval interpretation with its rapid loss of
 precision, what you are left with is an inexact flag per value, but with no
 useful error bounds.   And I don't believe that this is much more useful
 than a single inexact flag for a set of computations as in IEEE.





Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Tom Breloff
It's better in the sense that you have a reason to try it with a larger
type.  You know exactly how much precision you've lost, and so you can
decide to use up to 1024 bits for intermediate calculations if you need
to.  If sqrt(2) is part of your calculation, the inexact field for floats
will be set no matter the calculation, and you only know that "my answer is
always wrong."  I wouldn't exactly call this a useful/actionable statement.

On Thu, Jul 30, 2015 at 10:58 AM, Stefan Karpinski ste...@karpinski.org
wrote:

 This doesn't seem any better than "try the computation with Float128s".

 On Thu, Jul 30, 2015 at 10:27 AM, Tom Breloff t...@breloff.com wrote:

 Steven: There is a section in the book dedicated to writing dynamically
 scaling precision/accuracy into your algorithms.  The idea is this:

 - Pick a small format unum at the start of your algorithm.
 - During the algorithm, check your unums for insufficient
 precision/accuracy in the final interval.
 - As soon as you discover the intervals getting too large, restart with a
 new unum environment.

 Obviously this type of resetting shouldn't be default behavior, but the
 point is that you have as much flexibility as you need to precisely define
 the level of detail that you care about, and there is sufficient
 information in your result that you can re-run with better settings if the
 result is unsatisfactory.

 The problem with floats is that you can get the result of a black-box
 calculation and have NO IDEA how wrong you are... only that your solution
 is not exact.  This concept should make you skeptical of every float
 calculation that results with the inexact flag being set.


 On Thu, Jul 30, 2015 at 10:07 AM, Steven G. Johnson 
 stevenj@gmail.com wrote:

 On Wednesday, July 29, 2015 at 5:47:50 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 00:00:56 UTC+3, Steven G. Johnson wrote:

 Job, I'm basing my judgement on the presentation.


 Ah ok, I was wondering. I feel like those presentations give a general
 impression, but don't really explain the details enough. And like I said,
 your critique overlaps with Gustafson's own critique of traditional
 interval arithmetic, so I wasn't sure if you meant that you don't buy his
 suggested alternative ubox method after reading the book, or indicated
 scepticism based on earlier experience, but without full knowledge of what
 his suggested alternative is.


 From the presentation, it seemed pretty explicit that the ubox method
 replaces a single interval or pair of intervals with a rapidly expanding
 set of boxes.  I just don't see any conceivable way that this could be
 practical for large-scale problems involving many variables.


 Well.. we give up one bit of *precision* in the fraction, but *our set
 of representations is still the same size*. We still have the same
 number of floats as before! It's just that half of them are now exact (with
 one bit less precision), and the other half represents open intervals
 between these exact numbers. Which lets you represent the entire real
 number line accurately (but with limited precision, unless they happen to
 be equal to an exact float).


 Sorry, but that just does not and cannot work.

 The problem is that if you interpret an exact unum as the open interval
 between two adjacent exact values, what you have is essentially the same as
 interval arithmetic.  The result of each operation will produce intervals
 that are broader and broader (necessitating lower and lower precision
 unums), with the well known problem that the interval quickly becomes
 absurdly pessimistic in real problems (i.e. you quickly and prematurely
 discard all of your precision in a variable-precision format like unums).

 The real problem with interval arithmetic is not open vs. closed
 intervals, it is this growth of the error bounds in realistic computations
 (due to the dependency problem and similar).  (The focus on infinite and
 semi-infinite open intervals is a sideshow.  If you want useful error
 bounds, the important things are the *small* intervals.)

 If you discard the interval interpretation with its rapid loss of
 precision, what you are left with is an inexact flag per value, but with no
 useful error bounds.   And I don't believe that this is much more useful
 than a single inexact flag for a set of computations as in IEEE.






Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Tom Breloff
Steven: There is a section in the book dedicated to writing dynamically
scaling precision/accuracy into your algorithms.  The idea is this:

- Pick a small format unum at the start of your algorithm.
- During the algorithm, check your unums for insufficient
precision/accuracy in the final interval.
- As soon as you discover the intervals getting too large, restart with a
new unum environment.

Obviously this type of resetting shouldn't be default behavior, but the
point is that you have as much flexibility as you need to precisely define
the level of detail that you care about, and there is sufficient
information in your result that you can re-run with better settings if the
result is unsatisfactory.

The problem with floats is that you can get the result of a black-box
calculation and have NO IDEA how wrong you are... only that your solution
is not exact.  This concept should make you skeptical of every float
calculation that results with the inexact flag being set.
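
The restart pattern in the list above can be sketched with BigFloat precision
standing in for the unum environment (my analogy, not unums; the function
names are made up, and with_bigfloat_precision is the 0.3/0.4-era Base
function): rerun the computation with more bits until two consecutive
precisions agree.

function run_until_tight(f, tol; start_bits = 64, max_bits = 1024)
    bits = start_bits
    while bits <= max_bits
        lo = with_bigfloat_precision(f, bits)    # run in a small environment
        hi = with_bigfloat_precision(f, 2bits)   # and in a larger one
        abs(lo - hi) < tol && return hi          # agreement: accept
        bits *= 2                                # else restart, bigger
    end
    error("no convergence up to $max_bits bits")
end

# a cancellation-prone expression that needs ~100 bits to come out right:
run_until_tight(() -> (big(1e30) + big(1.0)) - big(1e30), 1e-10)   # 1.0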


On Thu, Jul 30, 2015 at 10:07 AM, Steven G. Johnson stevenj@gmail.com
wrote:

 On Wednesday, July 29, 2015 at 5:47:50 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 00:00:56 UTC+3, Steven G. Johnson wrote:

 Job, I'm basing my judgement on the presentation.


 Ah ok, I was wondering. I feel like those presentations give a general
 impression, but don't really explain the details enough. And like I said,
 your critique overlaps with Gustafson's own critique of traditional
 interval arithmetic, so I wasn't sure if you meant that you don't buy his
 suggested alternative ubox method after reading the book, or indicated
 scepticism based on earlier experience, but without full knowledge of what
 his suggested alternative is.


 From the presentation, it seemed pretty explicit that the ubox method
 replaces a single interval or pair of intervals with a rapidly expanding
 set of boxes.  I just don't see any conceivable way that this could be
 practical for large-scale problems involving many variables.


 Well.. we give up one bit of *precision* in the fraction, but *our set
 of representations is still the same size*. We still have the same
 number of floats as before! It's just that half of them are now exact (with
 one bit less precision), and the other half represents open intervals
 between these exact numbers. Which lets you represent the entire real
 number line accurately (but with limited precision, unless they happen to
 be equal to an exact float).


 Sorry, but that just does not and cannot work.

 The problem is that if you interpret an exact unum as the open interval
 between two adjacent exact values, what you have is essentially the same as
 interval arithmetic.  The result of each operation will produce intervals
 that are broader and broader (necessitating lower and lower precision
 unums), with the well known problem that the interval quickly becomes
 absurdly pessimistic in real problems (i.e. you quickly and prematurely
 discard all of your precision in a variable-precision format like unums).

 The real problem with interval arithmetic is not open vs. closed
 intervals, it is this growth of the error bounds in realistic computations
 (due to the dependency problem and similar).  (The focus on infinite and
 semi-infinite open intervals is a sideshow.  If you want useful error
 bounds, the important things are the *small* intervals.)

 If you discard the interval interpretation with its rapid loss of
 precision, what you are left with is an inexact flag per value, but with no
 useful error bounds.   And I don't believe that this is much more useful
 than a single inexact flag for a set of computations as in IEEE.




Re: [julia-users] Re: ANN: Testing specific Julia versions on Travis CI

2015-07-30 Thread Kristoffer Carlsson
How are we doing on the software side for performance tracking? Any 
concrete plan yet?

On Thursday, July 30, 2015 at 6:16:07 PM UTC+2, Stefan Karpinski wrote:

 Hardware for automated performance tracking has been ordered and should 
 arrive next month.

 On Thu, Jul 30, 2015 at 12:13 PM, Michael Prentiss mcpre...@gmail.com wrote:

 This is great progress. 

 Similarly, is there a way for benchmarking on different versions of the 
 code?
 Automating this will be very helpful.





[julia-users] Re: Recommended way of implementing getters and setters

2015-07-30 Thread Adriano Vilela Barbosa
I know exactly what you're talking about; I'm having the same problem.

I thought that maybe I shouldn't use a variable n_samples in the first 
place and simply do n_samples(time_signal) whenever I need to query the 
number of samples in the signal. However, this is annoying, and sometimes 
we do want to store the returned value in some other variable (maybe, for 
example, because the getter method has to do some expensive computation and 
we don't want to run it every time we query the property).

What are you doing in your case? Maybe adding a "get" prefix to your getter 
methods?

Adriano


On Thursday, July 30, 2015 at 5:25:37 PM UTC-3, Kristoffer Carlsson wrote:

 When I name my getters like that I always run into the problem that I want 
 to do:

 n_samples = n_samples(time_signal).

 On Thursday, July 30, 2015 at 10:13:02 PM UTC+2, Adriano Vilela Barbosa 
 wrote:

 I see. I looked at PySide.jl because I use Qt quite a bit in Python 
 (though, for historical reasons, I use PyQt instead of PySide).

 I had a look at Gtk.jl and noticed that they use getproperty() and 
 setproperty!() a lot. For example:

 setproperty!(win, :title, "New title")

 getproperty(win, :title, String)

 I guess this makes sense for a GUI toolkit where objects have lots of 
 properties. In my case, I think it makes more sense to do things like (I 
 dropped the get_ prefix from the getter methods)

 n_samples(time_signal)
 time_vector(time_signal)

 instead of

 get(time_signal,:n_samples)
 get(time_signal,:time_vector)

 Hopefully, I'm in the right direction here and using Julia's generic 
 concepts. Not sure exactly what you mean by that...

 Thanks a lot,

 Adriano

 On Thursday, July 30, 2015 at 12:34:58 PM UTC-3, j verzani wrote:

 That's maybe not the best package to look at for examples. It was 
 written quite awhile ago (and doesn't get any use as far as I can tell). 
 The `get_value` is generic in the sense that the main property for 
 different widgets might have different property names and this function 
 would just look it up based on the type of the widget.

 As for your original question, the use of indexing by symbols is 
 inherited from PyCall, and is only there because the dot isn't available. 
 I don't think it makes a good interface. It is awkward to type for 
 starters. The advice to use Julia's generic concepts, as possible, is a 
 good one. An example there would be the interface for Gtk through Gtk.jl.

 On Thursday, July 30, 2015 at 10:50:02 AM UTC-4, Adriano Vilela Barbosa 
 wrote:

 Thanks for your answer.

 Before posting my original question, I took a look at some packages 
 listed in

 http://pkg.julialang.org/

 just to see what people were doing. I didn't notice much of a pattern, 
 and that's why I decided to ask here. 

 For example, the package PySide (https://github.com/jverzani/PySide.jl) 
 offers both interfaces, so that it's possible to do either

 w = Qt.QWidget()  # constructors
 w[:setWindowTitle]("Hello world example") # w.setWindowTitle() is 
 w[:setWindowTitle] in PyCall

 or

 w = Widget()
 setWindowTitle(w, "Hello world example (redux)") # methodName(object, 
 args...)

 At the end of that page, the author talks about generic methods such 
 as get_value() and set_value(), which makes me wonder if he's thinking of 
 things like

 get_value(obj,property_name)

 and

 set_value(obj,property_name,property_value)

 Maybe in the end it's just a matter of personal preference, at least 
 for now.

 Thanks a lot.

 Adriano



Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Jason Merrill
On Thursday, July 30, 2015 at 3:10:24 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 16:07:46 UTC+2, Steven G. Johnson wrote:

 The problem is that if you interpret an exact unum as the open interval 
 between two adjacent exact values, what you have is essentially the same as 
 interval arithmetic.  The result of each operation will produce intervals 
 that are broader and broader (necessitating lower and lower precision 
 unums), with the well known problem that the interval quickly becomes 
 absurdly pessimistic in real problems (i.e. you quickly and prematurely 
 discard all of your precision in a variable-precision format like unums).

 The real problem with interval arithmetic is not open vs. closed 
 intervals, it is this growth of the error bounds in realistic computations 
 (due to the dependency problem and similar).  (The focus on infinite and 
 semi-infinite open intervals is a sideshow.  If you want useful error 
 bounds, the important things are the *small* intervals.)

 If you discard the interval interpretation with its rapid loss of 
 precision, what you are left with is an inexact flag per value, but with no 
 useful error bounds. And I don't believe that this is much more useful than 
 a single inexact flag for a set of computations as in IEEE.


 The thing is, these are *exactly *the criticisms Gustafson has of 
 traditional interval arithmetic. In fact, he's even more critical of 
 interval arithmetic than he is of floats, as far as I can see. However, he 
 claims that ubounds don't share the absurd pessimism problem. Supposedly, 
 traditional interval arithmetic by necessity needs to be more pessimistic 
 about its boundaries due to rounding, and only using closed endpoint 
 instead of allowing for open intervals. Unums instead are (supposedly) more 
 precise about the information loss they have, and thus (supposedly) don't 
 blow up as badly. Again, his claims, not mine. I'm not saying you're wrong, 
 or even sure if you disagree as much as you might think you are (although 
 I'm pretty sure you wouldn't like the tone he uses when describing 
 traditional methods though).

 I agree with the others about the grain of salt (unums/ubounds/uboxes *always 
 *come out on top in his examples, which does make you wonder), but on the 
 other hand: given that the mathematica implementation of his methods are 
 open source, his claims *should* be verifiable (they can be found here 
 under Downloads/Updates 
 https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gustafson/9781482239867, 
 Simon Byrne linked it earlier. I also found a Python port 
 https://github.com/jrmuizel/pyunum).


If you inspect the specific examples of challenge problems that Gustafson 
gives in Chapter 14 of his book, the open vs. closed interval distinction 
doesn't actually make an important appearance. The main ways that ubounds 
do better than Mathematica's Intervals are

1. Fused operations allow getting around specific cases of the dependency 
problem. E.g. using squareu[x] instead of x*x allows putting 0 as a lower 
bound of the result, and fdotu allows avoiding cancelation in dot products 
(and as a special case, sums and differences).
2. Sometimes the unums are allowed to have more bits in their mantissa than 
the bounds of the float Intervals.
3. The ubounds often use fewer bits in their representation (averaged over 
a whole calculation) than Interval alternatives.

Only the first two are relevant to correctness/precision. Number 3 is a 
performance issue that I don't want to discuss.

Looking at the specific examples,

* Wrath of Kahan, 1
Intervals eventually blow up to [-Inf, Inf]. Ubounds also diverge (from the 
text, "If you keep going, the left endpoint falls below 6, and then 
diverges towards -Inf, but remember that a unum environment can 
automatically detect when relative width gets too high..."). They diverge 
slightly more slowly in this case because he has allowed 2^6=64 bits in the 
mantissa of the ubounds endpoints, whereas the double Interval bounds have 
only 52 bits in their mantissa.

* Wrath of Kahan, 2
In the unum computation, squaring operations are fused with squareu, but 
float Interval calculations are not fused. I also believe the check for the 
ubounds containing zero in e[z] is erroneous in a way that makes this 
example very deceiving, but I don't want to go into that detail here.

* Rump's royal pain
Uses fused powu and squareu operations, and allows up to 2^7=128 bits in 
the mantissa for ubounds, whereas double intervals are computed without 
fused operations and only allow 52 bits in their mantissa. The fused 
operations are critical here.

* Quadratic formula
Main advantage comes from using fused squareu operations, and allowing more 
bits in the mantissa of the unums than in the mantissa of the single 
precision floats. No comparison to float Intervals here.

* Bailey's 

[julia-users] Re: Recommended way of implementing getters and setters

2015-07-30 Thread Kristoffer Carlsson
When I name my getters like that I always run into the problem that I want 
to do:

n_samples = n_samples(time_signal).

On Thursday, July 30, 2015 at 10:13:02 PM UTC+2, Adriano Vilela Barbosa 
wrote:

 I see. I looked at PySide.jl because I use Qt quite a bit in Python 
 (though, for historical reasons, I use PyQt instead of PySide).

 I had a look at Gtk.jl and noticed that they use getproperty() and 
 setproperty!() a lot. For example:

 setproperty!(win, :title, "New title")

 getproperty(win, :title, String)

 I guess this makes sense for a GUI toolkit where objects have lots of 
 properties. In my case, I think it makes more sense to do things like (I 
 dropped the get_ prefix from the getter methods)

 n_samples(time_signal)
 time_vector(time_signal)

 instead of

 get(time_signal,:n_samples)
 get(time_signal,:time_vector)

 Hopefully, I'm in the right direction here and using Julia's generic 
 concepts. Not sure exactly what you mean by that...

 Thanks a lot,

 Adriano

 On Thursday, July 30, 2015 at 12:34:58 PM UTC-3, j verzani wrote:

 That's maybe not the best package to look at for examples. It was written 
 quite awhile ago (and doesn't get any use as far as I can tell). The 
 `get_value` is generic in the sense that the main property for 
 different widgets might have different property names and this function 
 would just look it up based on the type of the widget.

 As for your original question, the use of indexing by symbols is 
 inherited from PyCall, and is only there because the dot isn't available. 
 I don't think it makes a good interface. It is awkward to type for 
 starters. The advice to use Julia's generic concepts, as possible, is a 
 good one. An example there would be the interface for Gtk through Gtk.jl.

 On Thursday, July 30, 2015 at 10:50:02 AM UTC-4, Adriano Vilela Barbosa 
 wrote:

 Thanks for your answer.

 Before posting my original question, I took a look at some packages 
 listed in

 http://pkg.julialang.org/

 just to see what people were doing. I didn't notice much of a pattern, 
 and that's why I decided to ask here. 

 For example, the package PySide (https://github.com/jverzani/PySide.jl) 
 offers both interfaces, so that it's possible to do either

 w = Qt.QWidget()  # constructors
 w[:setWindowTitle]("Hello world example") # w.setWindowTitle() is 
 w[:setWindowTitle] in PyCall

 or

 w = Widget()
 setWindowTitle(w, "Hello world example (redux)") # methodName(object, 
 args...)

 At the end of that page, the author talks about generic methods such 
 as get_value() and set_value(), which makes me wonder if he's thinking of 
 things like

 get_value(obj,property_name)

 and

 set_value(obj,property_name,property_value)

 Maybe in the end it's just a matter of personal preference, at least for 
 now.

 Thanks a lot.

 Adriano



[julia-users] Re: Changing Path to .Julia folder

2015-07-30 Thread dworkg1
To further clarify the question: Julia packages are installed in ~/.julia, 
but ~ now points to a different directory. How do I change the directory 
that ~ points to?
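
One workaround, assuming the 0.3-era behavior where Pkg.dir() consults the
JULIA_PKGDIR environment variable before falling back to the home directory
(the exact path below is illustrative):

ENV["JULIA_PKGDIR"] = "C:\\Users\\Me\\.julia"   # point Pkg back at the old tree
Pkg.dir()                                       # now C:\Users\Me\.julia\v0.3

Setting JULIA_PKGDIR in the system environment before launching Julia should
have the same effect.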

On Thursday, July 30, 2015 at 3:49:43 PM UTC-4, dwo...@gmail.com wrote:

 I was in the process of installing cmake and visual studio and somehow my 
 default Julia package and history folder changed from C:\Users\Me 
 to C:\\work\\home\\.julia\\v0.3. How can I change it back? My .julia 
 folder and .julia_history are all in  C:\Users\Me  but Pkg.dir() 
 redirects to the new folder right now.



Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Stefan Karpinski
It seems like you could apply all of these tricks to intervals to the same
effect, and it would still be faster since Float64 ops are implemented in
hardware. For example, given an Interval type, you can define
square(::Interval) so that the lower bound is 0; you can also define
dot(::Vector{Interval}, ::Vector{Interval}) cleverly, etc. (In fact, these
things seem better done at the language level than at the hardware level.)
Moreover, none of this really addresses the fundamental issue – no
systematic solution to the divergence problem is provided, just a
collection of hacks to make it slightly less bad in certain special
circumstances.
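
A quick sketch of the square(::Interval) point with a toy type (mine; outward
rounding again omitted):

immutable Interval
    lo::Float64
    hi::Float64
end

# naive product: treats the two operands as independent variables
function mul(a::Interval, b::Interval)
    p1, p2, p3, p4 = a.lo*b.lo, a.lo*b.hi, a.hi*b.lo, a.hi*b.hi
    Interval(min(p1, p2, p3, p4), max(p1, p2, p3, p4))
end

# fused square: knows both factors are the same variable
function square(a::Interval)
    m = max(a.lo^2, a.hi^2)
    a.lo <= 0.0 <= a.hi ? Interval(0.0, m) : Interval(min(a.lo^2, a.hi^2), m)
end

x = Interval(-1.0, 2.0)
mul(x, x)    # Interval(-2.0, 4.0): spurious negative lower bound
square(x)    # Interval(0.0, 4.0): the dependency is eliminated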

On Thu, Jul 30, 2015 at 3:54 PM, Jason Merrill jwmerr...@gmail.com wrote:

 On Thursday, July 30, 2015 at 3:10:24 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 16:07:46 UTC+2, Steven G. Johnson wrote:

 The problem is that if you interpret an exact unum as the open interval
 between two adjacent exact values, what you have is essentially the same as
 interval arithmetic.  The result of each operation will produce intervals
 that are broader and broader (necessitating lower and lower precision
 unums), with the well known problem that the interval quickly becomes
 absurdly pessimistic in real problems (i.e. you quickly and prematurely
 discard all of your precision in a variable-precision format like unums).

 The real problem with interval arithmetic is not open vs. closed
 intervals, it is this growth of the error bounds in realistic computations
 (due to the dependency problem and similar).  (The focus on infinite and
 semi-infinite open intervals is a sideshow.  If you want useful error
 bounds, the important things are the *small* intervals.)

 If you discard the interval interpretation with its rapid loss of
 precision, what you are left with is an inexact flag per value, but with no
 useful error bounds. And I don't believe that this is much more useful than
 a single inexact flag for a set of computations as in IEEE.


 The thing is, these are *exactly *the criticisms Gustafson has of
 traditional interval arithmetic. In fact, he's even more critical of
 interval arithmetic than he is of floats, as far as I can see. However, he
 claims that ubounds don't share the absurd pessimism problem. Supposedly,
 traditional interval arithmetic by necessity needs to be more pessimistic
 about its boundaries due to rounding, and only using closed endpoint
 instead of allowing for open intervals. Unums instead are (supposedly) more
 precise about the information loss they have, and thus (supposedly) don't
 blow up as badly. Again, his claims, not mine. I'm not saying you're wrong,
 or even sure if you disagree as much as you might think you are (although
 I'm pretty sure you wouldn't like the tone he uses when describing
 traditional methods though).

 I agree with the others about the grain of salt (unums/ubounds/uboxes *always
 *come out on top in his examples, which does make you wonder), but on
 the other hand: given that the mathematica implementation of his methods
 are open source, his claims *should* be verifiable (they can be found here
 under Downloads/Updates
 https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gustafson/9781482239867,
 Simon Byrne linked it earlier. I also found a Python port
 https://github.com/jrmuizel/pyunum).


 If you inspect the specific examples of challenge problems that Gustafson
 gives in Chapter 14 of his book, the open vs. closed interval distinction
 doesn't actually make an important appearance. The main ways that ubounds
 do better than Mathematica's Intervals are

 1. Fused operations allow getting around specific cases of the dependency
 problem. E.g. using squareu[x] instead of x*x allows putting 0 as a lower
 bound of the result, and fdotu allows avoiding cancelation in dot products
 (and as a special case, sums and differences).
 2. Sometimes the unums are allowed to have more bits in their mantissa
 than the bounds of the float Intervals.
 3. The ubounds often use fewer bits in their representation (averaged over
 a whole calculation) than Interval alternatives.

 Only the first two are relevant to correctness/precision. Number 3 is a
 performance issue that I don't want to discuss.

 Looking at the specific examples,

 * Wrath of Kahan, 1
 Intervals eventually blow up to [-Inf, Inf]. Ubounds also diverge (from
 the text, "If you keep going, the left endpoint falls below 6, and then
 diverges towards -Inf, but remember that a unum environment can
 automatically detect when relative width gets too high..."). They diverge
 slightly more slowly in this case because he has allowed 2^6=64 bits in the
 mantissa of the ubounds endpoints, whereas the double Interval bounds have
 only 52 bits in their mantissa.

 * Wrath of Kahan, 2
 In the unum computation, squaring operations are fused with squareu, but
 float Interval 

[julia-users] Re: Recommended way of implementing getters and setters

2015-07-30 Thread Adriano Vilela Barbosa
I see. I looked at PySide.jl because I use Qt quite a bit in Python 
(though, for historical reasons, I use PyQt instead of PySide).

I had a look at Gtk.jl and noticed that they use getproperty() and 
setproperty!() a lot. For example:

setproperty!(win, :title, "New title")

getproperty(win, :title, String)

I guess this makes sense for a GUI toolkit where objects have lots of 
properties. In my case, I think it makes more sense to do things like (I 
dropped the get_ prefix from the getter methods)

n_samples(time_signal)
time_vector(time_signal)

instead of

get(time_signal,:n_samples)
get(time_signal,:time_vector)

Hopefully, I'm in the right direction here and using Julia's generic 
concepts. Not sure exactly what you mean by that...

Thanks a lot,

Adriano

On Thursday, July 30, 2015 at 12:34:58 PM UTC-3, j verzani wrote:

 That's maybe not the best package to look at for examples. It was written 
 quite awhile ago (and doesn't get any use as far as I can tell). The 
 `get_value` is generic in the sense that the main property for 
 different widgets might have different property names and this function 
 would just look it up based on the type of the widget.

 As for your original question, the use of indexing by symbols is inherited 
 from PyCall, and is only there because the dot isn't available. I don't 
 think it makes a good interface. It is awkward to type for starters. The 
 advice to use Julia's generic concepts, as possible, is a good one. An 
 example there would be the interface for Gtk through Gtk.jl.

 On Thursday, July 30, 2015 at 10:50:02 AM UTC-4, Adriano Vilela Barbosa 
 wrote:

 Thanks for your answer.

 Before posting my original question, I took a look at some packages 
 listed in

 http://pkg.julialang.org/

 just to see what people were doing. I didn't notice much of a pattern, 
 and that's why I decided to ask here. 

 For example, the package PySide (https://github.com/jverzani/PySide.jl) 
 offers both interfaces, so that it's possible to do either

 w = Qt.QWidget()  # constructors
 w[:setWindowTitle]("Hello world example") # w.setWindowTitle() is 
 w[:setWindowTitle] in PyCall

 or

 w = Widget()
 setWindowTitle(w, "Hello world example (redux)") # methodName(object, 
 args...)

 At the end of that page, the author talks about generic methods such as 
 get_value() and set_value(), which makes me wonder if he's thinking of 
 things like

 get_value(obj,property_name)

 and

 set_value(obj,property_name,property_value)

 Maybe in the end it's just a matter of personal preference, at least for 
 now.

 Thanks a lot.

 Adriano



Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Jason Merrill
On Thursday, July 30, 2015 at 4:22:34 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 21:54:39 UTC+2, Jason Merrill wrote:

 Analysis of examples in the book


 Thanks for correcting me! The open/closed element becomes pretty crucial 
 later on though, when he claims on page 225 that:

 "a general approach for evaluating polynomials with interval arguments 
 without any information loss is presented here for the first time."

  
 Two pages later he gives the general scheme for it (see attached picture - 
 it was too much of a pain to extract that text with proper formatting. This 
 is ok under fair use right?).

 Do you have any thoughts on that?


The fused polynomial evaluation seems pretty brilliant to me. He later goes 
on to suggest having a fused product ratio, which should largely allow 
eliminating the dependency problem from evaluating rational functions. You 
can get an awful lot done with rational functions.

https://lh3.googleusercontent.com/-f-sYnCMJFpQ/VbqE8zbN5AI/HOk/cNTnxAUAyoU/s1600/polynomial.png

I actually think keeping track of open vs. closed intervals sounds like a 
pretty good idea. It might also be worth doing for other kinds of interval 
arithmetic, and I don't see any major reason that that would be impossible. 
I didn't mean to say that open vs closed intervals doesn't matter--I just 
meant that it doesn't seem to be the secret sauce in any of the challenge 
problems in Chapter 14. To me, the fused operations are the secret sauce in 
terms of precision, and the variable length representation *might be* the 
secret sauce for performance, but I can't really comment on that. 


Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Stefan Karpinski
Personally, I'm just trying to figure out what the secret sauce is.

   - Are unum operations associative? If so, then how is that accomplished?
   It seems like the answer is "not really", at least not in the sense that
   one usually means it – i.e. that doing the operations in different orders
   produces the same results.


   - Is there some fundamental reason why unum's are better than intervals
   when it comes to limit divergence? The answer seems to be no – or at least
   that unums don't do anything that you couldn't also do to limit the
   divergence of intervals.

What do seem like interesting ideas are the inexact bit and variable
precision.

   - Using the inexact bit to represent either a value or an interval with
   the same type is clever and I do like how it covers *all* of the real
   number line. On the other hand, you can represent an exact value with a
   closed interval and equal endpoints.


   - Variable precision gives the type more flexibility than floats in much
   the same way that floats are more flexible than fixed-point numbers – it's
   a point even further on the flexibility versus complexity tradeoff. These
   days, probably a good tradeoff, given that we have more transistors than we
   know what to do with.

Are these things enough to warrant changing how all the hardware everywhere
in the world does arithmetic? It's certainly worth implementing and seeing
how well it works, and Julia is a uniquely good language for doing such
experiments.
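
As a toy illustration of the square(::Interval) idea raised in the quoted
thread below (Julia 0.4 syntax; this Interval type is hypothetical and
outward rounding is ignored):

import Base: *

immutable Interval
    lo::Float64
    hi::Float64
end

# naive interval product: treats the two factors as independent, so with
# x = Interval(-1.0, 2.0), x*x gives Interval(-2.0, 4.0)
prods(a::Interval, b::Interval) = (a.lo*b.lo, a.lo*b.hi, a.hi*b.lo, a.hi*b.hi)
*(a::Interval, b::Interval) = Interval(minimum(prods(a, b)), maximum(prods(a, b)))

# a "fused" square knows both factors are the same variable, so the result
# can never be negative: square(Interval(-1.0, 2.0)) == Interval(0.0, 4.0)
function square(x::Interval)
    lo2, hi2 = x.lo^2, x.hi^2
    x.lo <= 0.0 <= x.hi ? Interval(0.0, max(lo2, hi2)) :
                          Interval(min(lo2, hi2), max(lo2, hi2))
end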

On Thu, Jul 30, 2015 at 4:50 PM, Tom Breloff t...@breloff.com wrote:

 So I see a few recurring themes in this discussion:

 1) "Floats can do anything Unums can do if you make them big enough."
 I mostly agree with this... But that's a similar argument to saying that we
 could just represent a UInt64 by an immutable with 8 UInt8 fields. Sure you
 could do it, but it's not a very elegant solution.

 2) "Unum intervals and float intervals are the same thing if they have the
 same precision." This I don't think I agree with, if there is an exact unum
 involved in the calc. I feel like incorporating this flag for exactness
 (the ubit) is the key point, and changes the results immensely.  You could
 just make an immutable with a Float64 and a Bool (the ubit) and
 mostly accomplish the same thing... So the Unum is just one way to
 accomplish this.

 3) "We shouldn't explore alternatives to floats, because floats are what
 is currently optimized in hardware."  Really? Where's your adventurous
 spirit? There are some good ideas that, if embraced by a community of
 forward-thinking scientists, could most certainly be optimized in hardware
 someday.

 All this to say... I see promise in the concepts of flexible precision and
 exactness information, and I think it can be a more elegant medium to
 attack some problems.

 If unums and floats were both optimized in hardware and equivalently fast,
 I would likely choose unums all the time. With a software implementation,
 it would depend on the application whether there's any value. Either way,
 I'm working on a prototype and you can decide for yourself if you see any
 value in it.

 On Thursday, July 30, 2015, Stefan Karpinski ste...@karpinski.org wrote:

 It seems like you could apply all of these tricks to intervals to the
 same effect, and it would still be faster since Float64 ops are implemented
 in hardware. For example, given an Interval type, you can define
 square(::Interval) so that the lower bound is 0; you can also define
 dot(::Vector{Interval}, ::Vector{Interval}) cleverly, etc. (In fact, these
  things seem better done at the language level than at the hardware level.)
  Moreover, none of this really addresses the fundamental issue – no
  systematic solution to the divergence problem is provided, just a
  collection of hacks to make it slightly less bad in certain special
  circumstances.

 On Thu, Jul 30, 2015 at 3:54 PM, Jason Merrill jwmerr...@gmail.com
 wrote:

 On Thursday, July 30, 2015 at 3:10:24 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 16:07:46 UTC+2, Steven G. Johnson wrote:

 The problem is that if you interpret an exact unum as the open
 interval between two adjacent exact values, what you have is essentially
 the same as interval arithmetic.  The result of each operation will 
 produce
 intervals that are broader and broader (necessitating lower and lower
 precision unums), with the well known problem that the interval quickly
 becomes absurdly pessimistic in real problems (i.e. you quickly and
 prematurely discard all of your precision in a variable-precision format
 like unums).

 The real problem with interval arithmetic is not open vs. closed
 intervals, it is this growth of the error bounds in realistic computations
 (due to the dependency problem and similar).  (The focus on infinite and
 semi-infinite open intervals is a sideshow.  If you want useful error
 bounds, the important things are the *small* intervals.)

 If you discard the interval 

[julia-users] Changing Path to .Julia folder

2015-07-30 Thread dworkg1
I was in the process of installing cmake and visual studio and somehow my 
default Julia package and history folder changed from C:\Users\Me 
to C:\\work\\home\\.julia\\v0.3. How can I change it back? My .julia 
folder and .julia_history are all in  C:\Users\Me  but Pkg.dir() 
redirects to the new folder right now.
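
A diagnosis sketch that may help (it assumes the usual 0.3 behaviour, where
Pkg.dir() honours ENV["JULIA_PKGDIR"] if set and otherwise builds the path
from homedir()):

homedir()                           # expect "C:\\Users\\Me"
get(ENV, "JULIA_PKGDIR", "unset")   # expect "unset" on a default setup

# if one of the two now points at C:\work\home, override per session:
ENV["JULIA_PKGDIR"] = "C:\\Users\\Me\\.julia"
Pkg.dir()                           # should report the old location again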


Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Job van der Zwan
On Thursday, 30 July 2015 21:54:39 UTC+2, Jason Merrill wrote:

 Analysis of examples in the book


Thanks for correcting me! The open/closed element becomes pretty crucial 
later on though, when he claims on page 225 that:

a general approach for evaluating polynomials with interval arguments 
 without any information loss is presented here for the first time.

 
Two pages later he gives the general scheme for it (see attached picture - 
it was too much of a pain to extract that text with proper formatting. This 
is ok under fair use right?).

Do you have any thoughts on that?

https://lh3.googleusercontent.com/-f-sYnCMJFpQ/VbqE8zbN5AI/HOk/cNTnxAUAyoU/s1600/polynomial.png


Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Stefan Karpinski
 Fused polynomials do seem like a good idea (again, can be done for
intervals too), but what is the end game of this approach? Is there some
set of primitives that are sufficient to express all computations you might
want to do in a way that doesn't lose accuracy too rapidly to be useful? It
seems like the reductio ad absurdum is producing a fused version of your
entire program that cleverly produces a correct interval.

On Thu, Jul 30, 2015 at 5:20 PM, Jason Merrill jwmerr...@gmail.com wrote:

 On Thursday, July 30, 2015 at 4:22:34 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 21:54:39 UTC+2, Jason Merrill wrote:

 Analysis of examples in the book


 Thanks for correcting me! The open/closed element becomes pretty crucial
 later on though, when he claims on page 225 that:

 a general approach for evaluating polynomials with interval arguments
 without any information loss is presented here for the first time.


 Two pages later he gives the general scheme for it (see attached picture
 - it was too much of a pain to extract that text with proper formatting.
 This is ok under fair use right?).

 Do you have any thoughts on that?


 The fused polynomial evaluation seems pretty brilliant to me. He later
 goes on to suggest having a fused product ratio, which should largely allow
 eliminating the dependency problem from evaluating rational functions. You
 can get an awful lot done with rational functions.


  https://lh3.googleusercontent.com/-f-sYnCMJFpQ/VbqE8zbN5AI/HOk/cNTnxAUAyoU/s1600/polynomial.png
  I actually think keeping track of open vs. closed intervals sounds like a
 pretty good idea. It might also be worth doing for other kinds of interval
 arithmetic, and I don't see any major reason that that would be impossible.
  I didn't mean to say that open vs closed intervals doesn't matter--I just
 meant that it doesn't seem to be the secret sauce in any of the challenge
 problems in Chapter 14. To me, the fused operations are the secret sauce in
 terms of precision, and the variable length representation *might be* the
 secret sauce for performance, but I can't really comment on that.



Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Steven G. Johnson
People have devised methods for evaluation of polynomials with interval 
arithmetic, too (google it).  Not sure how his method compares.  It is well 
known that you can often work around the dependency problem for very 
specific expressions.

However, it is not practical to tell people that they need to solve a new 
numerical-analysis research problem every time they have a new program in 
which a variable is used more than once (used more than once = dependency 
problem).

And if you don't solve the dependency problem, your error bounds are 
useless for general-purpose tasks.  And without accurate error bounds, 
adaptive precision is a non-starter.


[julia-users] Enumerating permutations

2015-07-30 Thread Christopher Fisher
I was wondering if there is a function for enumerating all of the 
permutations of size m from n elements, with repetitions allowed. For 
example, permutations of size 3 from [1 0] would be [1 1 1; 1 1 0; 1 0 1;
0 1 1; 1 0 0; 0 1 0; 0 0 1; 0 0 0]. (Analogous to
http://www.mathworks.com/matlabcentral/fileexchange/11462-npermutek/content/npermutek.m)

Along similar lines, I was wondering if there is a function for 
permutations without repetitions, such as 2 elements from [1 2 3] is [1 2;1 
3;2 3;2 1;3 1;3 2]. I see that there is a permutations function but it only 
enumerates permutations of the same size as the original set. 

Thank you
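
A minimal hand-rolled sketch of both operations (0.3/0.4-era Julia;
combinations and permutations were in Base at the time, and the function
names here are made up):

# permutations of size m from `elements`, with repetition: count in base n
function perms_with_rep(elements::Vector, m::Int)
    n = length(elements)
    [[elements[div(k, n^(m-j)) % n + 1] for j in 1:m] for k in 0:n^m-1]
end

perms_with_rep([1, 0], 3)   # [1,1,1], [1,1,0], [1,0,1], ..., [0,0,0]

# permutations of size m without repetition: permute each m-combination
function kperms(elements::Vector, m::Int)
    out = Vector{eltype(elements)}[]
    for c in combinations(elements, m), p in permutations(c)
        push!(out, collect(p))
    end
    out
end

kperms([1, 2, 3], 2)        # [1,2], [2,1], [1,3], [3,1], [2,3], [3,2]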


[julia-users] Re: Syntax Highlighting in Vim

2015-07-30 Thread Ratan Sur
It seems like vim thinks ; is the comment char for julia when it's actually 
#. Do you know how one might fix this?

On Sunday, December 15, 2013 at 2:18:21 AM UTC-5, Thomas Moore wrote:

 I recently installed Ubuntu 12.04, and from there installed Julia through 
 githib. Now, when I open a .jl file in Vim, it seems there's some sort of 
 syntax highlighting which works, but it's inconsistent (words like 
 function, print and if are coloured, but others like for and 
 while are not.)

 Does anyone have any idea what's wrong here? Alternatively, I'm not really 
 attached to any editor yet in Ubuntu, and so if there's an easier way to 
 set up another editor with syntax highlighting, feel free to recommend it.

 Thanks



[julia-users] efficient use of shared arrays and @parallel for

2015-07-30 Thread thr
Hi all,

I'm implementing a basic explicit advection algorithm of the form:
 
   for t = 1:T-1
for j = 3:n-2
for i = 3:m-2
q[i,j,t+1]= timestep(q[i,j,t], u[i,j,t])
end
end 
end 


where q is a quantity and u a velocity field.
I'd like to parallelize this by using shared arrays and @parallel for; I 
tried the following:

const n = 500
const m = 500
const T = 500

@everywhere function timestep(x,y)
#return x+y
return x+y +x+y +x+y +x+y +x+y +x+y +x+y
end

function advection_ser(q, u)
println(==serial=$n x $m x $T)
for t = 1:T-1
for j = 3:n-2
for i = 3:m-2
q[i,j,t+1]= timestep(q[i,j,t], u[i,j,t])
end
end
end
return q
end

function advection_par(q,u)
println(==parallel=$n x $m x $T)
for t = 1:T-1
@sync @parallel for j = 3:n-2
for i = 3:m-2
q[i,j,t+1]= timestep(q[i,j,t], u[i,j,t])
end
end
end
return q
end

q= SharedArray(Float64, (m,n,T), init=false)
u= SharedArray(Float64, (m,n,T), init=false)

@time qs  = advection_ser(q,u)
@time qp  = advection_par(q,u)




But this yields only a very moderate speed gain: the parallel version is 
about 1/3 faster than the serial version for m,n,T=500,500,500 and -p 4. 
Is there a way I can improve on this?

I have also seen some weird behaviour regarding shared arrays and I'd like 
to verify that I'm not just doing it wrong before opening issues: 

1. When I construct q inside of the advection function, @code_warntype 
tells me that it's handled as an 'any' and the code is much slower. 
However, typeof(q) tells me it's of type SharedArray{Float64,3} as it 
should be.

2. I'm pretty sure there's a memory leak associated with SharedArrays, for 
when I start above program over and over eventually I get a bus error and 
julia crashes. Do I have to somehow release the shared memory from the 
workers? 

Thanks in advance, Johannes
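
One restructuring that sometimes helps (a sketch only, untested, along the
lines of the shared-array example in the manual): give each worker a fixed
block of columns and dispatch with remotecall_wait, so the @parallel
closure machinery stays out of the time loop:

# each worker updates only its own block of columns for one time step
@everywhere function advect_block!(q, u, jrange, t)
    for j in jrange, i in 3:size(q, 1)-2
        @inbounds q[i, j, t+1] = timestep(q[i, j, t], u[i, j, t])
    end
end

function advection_blocked(q, u)
    n, T = size(q, 2), size(q, 3)
    ws = workers()
    # split the column range 3:n-2 into one contiguous chunk per worker
    lo, hi = 3, n-2
    len = hi - lo + 1
    nw = length(ws)
    bounds = [lo + div(len*k, nw) for k in 0:nw]
    chunks = [bounds[k]:bounds[k+1]-1 for k in 1:nw]
    for t = 1:T-1
        @sync for (w, jr) in zip(ws, chunks)
            @async remotecall_wait(w, advect_block!, q, u, jr, t)
        end
    end
    return q
end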


[julia-users] Re: Errors while trying to use cxx and embed Julia together

2015-07-30 Thread Jeff Waller

Specifically I think this will not work because of the following: 
 

 double ArrayMaker::ArrayMak(int iNum, float fNum) {

 jl_init(JULIA_INIT_DIR);


 jl_atexit_hook();
 return sol;
 }
 }


What this is doing is starting up a 2nd julia engine inside of the original 
julia engine, and that can only work if julia has
no globals that must be initialized only once, with all mutually exclusive 
sections protected by a mutex (i.e. thread safe),
and currently it is not. 

In other words you're going beyond the question "can Julia be embedded in 
C++ as well as can C++ be embedded
in Julia simultaneously?" (answer: yes).  You're asking "can Julia be 
embedded within itself?", and I think the answer is
no.

But from what you're describing as the problem you're attempting to solve, 
you don't really need to accomplish Julia inside
Julia. You just need to call C++ from Julia.

What happens if you just simply leave out the calls to jl_init and 
jl_atexit_hook?



Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Job van der Zwan
On Thursday, 30 July 2015 00:33:52 UTC+2, Job van der Zwan wrote:

 BTW, Tom, I was already working on a summary of the book (on an IJulia 
 notebook). I'm on mobile right now so don't have access to it, but I can 
 share it later. I think something like that might be useful to attract more 
 collaborators - we can't expect everyone to read it.


Ok, so since Tom is already working on a package, I moved my 
summary-in-progress to Google Drive where it's easier for people to leave 
comments:

https://docs.google.com/document/d/1d36_ppKeZDuYRadLm9-Ty8Ai2XZE5MS5bwIuEKBJ1WE/edit?usp=sharing
 

For others who have read the book, please correct any errors or 
misunderstandings on my part that you see. Expanding sections is also 
encouraged :P

Right now it's very bare-bones (since the meat is what you *can do* with 
unums, not the definition of the format itself), but I'll hopefully get 
around to expanding it a bit in the coming weeks.


[julia-users] remote workers more efficient than local workers?

2015-07-30 Thread Deniz Yuret
Here is a parallel program:

M = [rand(1000,1000) for i=1:16]
@time pmap(svd, M)

Here are timing results for local workers on a 16 core machine1:

julia -p 2: 14.98 secs
julia -p 4: 16.02 secs
julia -p 8: 17.64 secs

Here are timing results for machine1 connecting to remote workers on same 
type of machine2:

julia --machinefile 2 copies of machine2: 11.75 secs
julia --machinefile 4 copies of machine2: 7.54 secs
julia --machinefile 8 copies of machine2: 6.46 secs

At first I thought things got messed up if the master and the slaves were 
on the same machine.
But it turns out the difference is between -p n vs. --machinefile.  If I 
rerun the same test on
a single machine, but use --machinefile instead of -p n:

julia --machinefile 2 copies of machine1: 8.41 secs
julia --machinefile 4 copies of machine1: 4.70 secs
julia --machinefile 8 copies of machine1: 3.31 secs

I am using Julia Version 0.3.9 (2015-05-30 11:24 UTC).

Why is -p n messed up?

thanks,
deniz



[julia-users] How can I change the R code to Julia code. Thank you.

2015-07-30 Thread meibujun
Hello, I do not know how to change list(), c(sib1,sib2) and 
a[[v]][inhe[[v]]==par] <-
par.al[[par]][inhe[[v]]==par] in R to julia code.
It is error:
ERROR: BoundsError()
 in getindex at array.jl:246 (repeats 2 times)

#R code
n.ped <- nrow(sib1$pat)
n.mark <- ncol(sib1$pat)
n.al <- length(f)
par.al <- list()
for(par in 1:4) par.al[[par]] <-
matrix(sample(1:n.al,n.ped*n.mark,
replace=TRUE,prob=f),n.ped,n.mark)
a <- inhe <- c(sib1,sib2)
for (v in 1:4) for (par in 1:4)
a[[v]][inhe[[v]]==par] <-
par.al[[par]][inhe[[v]]==par]


###julia code
nped=size(sib1,1)
nmark=size(sib1,2)
nal=size(f,2) 
paral=Dict() 
for (par=1:4)
  paral[[par]]=reshape(wsample([1:nal],vec(f),nped*4*nmark),nped*4,nmark)
end 
#a=inhe=hcat(sib1,sib2,sib3,sib4)
a=inhe=(Sib1,Sib2,Sib3,Sib4)
a=inhe=convert(DataFrame,a) 
println("mbjok")
for (v=1:4) for(par=1:4)
  println(v,par)
  if inhe[[v]]==par
a[v,inhe[v]]=paral[par,inhe[v]]
 end   
end end 
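
For what it's worth, a rough and untested sketch of how that R idiom might
map to Julia (it assumes sib1..sib4 are integer matrices of the same size,
f is a probability vector, and wsample comes from StatsBase):

using StatsBase   # provides wsample

nped, nmark = size(sib1)
nal = length(f)

# R's list() of matrices becomes a plain Vector of matrices (both 1-based)
paral = [reshape(wsample(1:nal, f, nped*nmark), nped, nmark) for par in 1:4]

inhe = Matrix{Int}[sib1, sib2, sib3, sib4]   # R's c(sib1, sib2, ...)
a = deepcopy(inhe)

# R's a[[v]][inhe[[v]]==par] <- par.al[[par]][inhe[[v]]==par] is logical
# (Bool mask) indexing in Julia too:
for v in 1:4, par in 1:4
    mask = inhe[v] .== par
    a[v][mask] = paral[par][mask]
end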



[julia-users] Re: Irregular Interpolation

2015-07-30 Thread Andrew
That sounds useful. I've been using Dierckx for my economics work to get 
interpolation on nonlinear grids, but it only goes to 2D. I haven't yet had 
the need for anything higher, but it's good to know there's stuff out there.

On Wednesday, July 29, 2015 at 1:49:28 PM UTC-4, Spencer Lyon wrote:

 It's currently going through a major overhaul to get ready for public 
 consumption, but CompEcon.jl https://github.com/spencerlyon2/CompEcon.jl 
 provides 
 a Julia implementation of the popular (amongst economists) CompEcon matlab 
 toolbox.

 It does irregular interpolation for an arbitrary number of dimensions 
 using chebyshev, b-spline (of arbitrary order), or piecewise linear basis 
 functions.

 If you are interested, let me know. I haven't had time to write docs yet, 
 but once I do it should be pretty straightforward. 

 On Wednesday, July 29, 2015 at 10:44:37 AM UTC-4, Nils Gudat wrote:

 Besided GridInterpolations, you might want to look at ApproXD.jl, which 
 works up to four dimensions. I once wrote a little script that compares 
 different one- and two-dimensional interpolation schemes in Julia, it can 
 be found here 
 https://github.com/nilshg/LearningModels/blob/master/Test_Interpolations.jl
  
 (I might have to update it to include GridInterpolations at some point).



[julia-users] Re: efficient use of shared arrays and @parallel for

2015-07-30 Thread thr
I also noticed a lot more Any-type warnings in the parallel version when 
run with @code_warntype. I tried to annotate the types almost everywhere, 
it didn't help.



[julia-users] How can I change the R code to Julia code. It is always error.Thank you!

2015-07-30 Thread meibujun
Hello, I do not know how to change list(), c(sib1,sib2) and 
a[[v]][inhe[[v]]==par] <-
par.al[[par]][inhe[[v]]==par] in R to julia code.
It is error:
ERROR: BoundsError()
 in getindex at array.jl:246 (repeats 2 times)

#R code
n.ped <- nrow(sib1$pat)
n.mark <- ncol(sib1$pat)
n.al <- length(f)
par.al <- list()
for(par in 1:4) par.al[[par]] <-
matrix(sample(1:n.al,n.ped*n.mark,
replace=TRUE,prob=f),n.ped,n.mark)
a <- inhe <- c(sib1,sib2)
for (v in 1:4) for (par in 1:4)
a[[v]][inhe[[v]]==par] <-
par.al[[par]][inhe[[v]]==par]


###julia code
nped=size(sib1,1)
nmark=size(sib1,2)
nal=size(f,2) 
paral=Dict() 
for (par=1:4)
 paral[[par]]=reshape(wsample([1:nal],vec(f),nped*4*nmark),nped*4,nmark)
end 
#a=inhe=hcat(sib1,sib2,sib3,sib4)
a=inhe=(Sib1,Sib2,Sib3,Sib4)
a=inhe=convert(DataFrame,a) 
println("mbjok")
for (v=1:4) for(par=1:4)
 println(v,par)
 if inhe[[v]]==par
   a[v,inhe[v]]=paral[par,inhe[v]]
 end  
end end 



[julia-users] Re: Syntax Highlighting in Vim

2015-07-30 Thread Tero Frondelius
If you are new to Vim, you might also want to consider 
Juno http://junolab.org/. 

On Sunday, December 15, 2013 at 9:18:21 AM UTC+2, Thomas Moore wrote:

 I recently installed Ubuntu 12.04, and from there installed Julia through 
 githib. Now, when I open a .jl file in Vim, it seems there's some sort of 
 syntax highlighting which works, but it's inconsistent (words like 
 function, print and if are coloured, but others like for and 
 while are not.)

 Does anyone have any idea what's wrong here? Alternatively, I'm not really 
 attached to any editor yet in Ubuntu, and so if there's an easier way to 
 set up another editor with syntax highlighting, feel free to recommend it.

 Thanks



Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Job van der Zwan
On Thursday, 30 July 2015 16:07:46 UTC+2, Steven G. Johnson wrote:

 The problem is that if you interpret an exact unum as the open interval 
 between two adjacent exact values, what you have is essentially the same as 
 interval arithmetic.  The result of each operation will produce intervals 
 that are broader and broader (necessitating lower and lower precision 
 unums), with the well known problem that the interval quickly becomes 
 absurdly pessimistic in real problems (i.e. you quickly and prematurely 
 discard all of your precision in a variable-precision format like unums).

 The real problem with interval arithmetic is not open vs. closed 
 intervals, it is this growth of the error bounds in realistic computations 
 (due to the dependency problem and similar).  (The focus on infinite and 
 semi-infinite open intervals is a sideshow.  If you want useful error 
 bounds, the important things are the *small* intervals.)

 If you discard the interval interpretation with its rapid loss of 
 precision, what you are left with is an inexact flag per value, but with no 
 useful error bounds. And I don't believe that this is much more useful than 
 a single inexact flag for a set of computations as in IEEE.


The thing is, these are *exactly *the criticisms Gustafson has of 
traditional interval arithmetic. In fact, he's even more critical of 
interval arithmetic than he is of floats, as far as I can see. However, he 
claims that ubounds don't share the absurd pessimism problem. Supposedly, 
traditional interval arithmetic by necessity needs to be more pessimistic 
about its boundaries due to rounding, and only using closed endpoint 
instead of allowing for open intervals. Unums instead are (supposedly) more 
precise about the information loss they have, and thus (supposedly) don't 
blow up as badly. Again, his claims, not mine. I'm not saying you're wrong, 
or even sure if you disagree as much as you might think you are (although 
I'm pretty sure you wouldn't like the tone he uses when describing 
traditional methods).

I agree with the others about the grain of salt (unums/ubounds/uboxes 
*always* come out on top in his examples, which does make you wonder), but 
on the other hand: given that the Mathematica implementation of his methods 
is open source, his claims *should* be verifiable (it can be found under 
Downloads/Updates at 
https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gustafson/9781482239867 
-- Simon Byrne linked it earlier. I also found a Python port: 
https://github.com/jrmuizel/pyunum).


[julia-users] Recommended way of implementing getters and setters

2015-07-30 Thread Tomas Lycken
For getters, I would opt for the latter approach with methods which you call on 
the time signal. However, I would also encourage you to extend already existing 
functions rather than creating new ones. For example, you could get the number 
of samples as Base.length(ts::TimeSignal). Sometimes it may make sense to 
provide more than one argument to these functions. 

For setters, I'd do the same thing where it makes sense, but make sure to 
follow the convention that the function name ends with a bang (!) and that the 
time signal is the first argument, e.g. set_stuff!(ts::TimeSignal, stuff...) 

Good luck! 
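
A minimal sketch of that advice (the TimeSignal fields here are invented
for illustration):

type TimeSignal
    samples::Vector{Float64}
    fs::Float64   # sampling rate, Hz
end

# getter through an existing generic: n_samples(ts) becomes length(ts)
Base.length(ts::TimeSignal) = length(ts.samples)

# a domain-specific getter gets its own function
time_vector(ts::TimeSignal) = [(i - 1) / ts.fs for i in 1:length(ts)]

# setter: bang suffix, signal as the first argument
function set_samples!(ts::TimeSignal, xs::Vector{Float64})
    ts.samples = xs
    return ts
end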

[julia-users] Re: ANN: Testing specific Julia versions on Travis CI

2015-07-30 Thread Michael Prentiss
This is great progress.  
Along these lines, is there a way of doing benchmarking against different 
versions of the code?

On Thursday, July 30, 2015 at 7:20:06 AM UTC-5, Tony Kelman wrote:

 Hey folks, an announcement for package authors and users who care about 
 testing:

 We've had support for Julia package testing on Travis CI 
 http://travis-ci.org for almost 9 months now, ref 
 https://groups.google.com/forum/#!msg/julia-users/BtCxh4k9hZA/ngUvxdxOxQ8J 
 if you missed the original announcement. Up to this point we supported the 
 following settings for which Julia version to test against:

 language: julia
 julia:
 - release
 - nightly

 Release has meant the latest release version in the 0.3.x series, and 
 nightly has meant the latest nightly build of 0.4-dev master. Once Julia 
 0.4.0 gets released, the meaning of these settings will change, where 
 release will be the latest version in the 0.4.x series, and nightly will be 
 the latest nightly build of 0.5-dev master. Considering the wide install 
 base and number of packages that may want to continue supporting 0.3 even 
 after 0.4.0 gets released, we've just added support for additional version 
 options in your .travis.yml file. You can now do

 julia: 
 - release
 - nightly
 - 0.3

 Or, if you want to test with specific point releases, you can do that too 
 (there should not usually be much need for this, but it could be useful 
 once in a while to compare different point releases):

 julia: 
 - release
 - nightly
 - 0.3
 - 0.3.10

 The oldest point release for which we have generic Linux binaries 
 available is 0.3.1. If you enable multi-os support for your repository (see 
 http://docs.travis-ci.com/user/multi-os/), then you can go back as far as 
 0.2.0 on OS X. Note that you'd need to replace the default test script with 
 the old-fashioned `julia test/runtests.jl` since `Pkg.test` and 
 `--check-bounds=yes` are not supported on Julia version 0.2.x. The 
 downloads of those versions would fail on Linux workers so you may need to 
 set up a build matrix with excluded jobs (see 
 http://docs.travis-ci.com/user/customizing-the-build/#Build-Matrix).

 Let us know if you have any questions or issues.

 Happy testing,
 Tony (with thanks to @ninjin and @staticfloat for PR review)



Re: [julia-users] Re: ANN: Testing specific Julia versions on Travis CI

2015-07-30 Thread Stefan Karpinski
Hardware for automated performance tracking has been ordered and should
arrive next month.

On Thu, Jul 30, 2015 at 12:13 PM, Michael Prentiss mcprent...@gmail.com
wrote:

 This is great progress.

 Similarly, is there a way for benchmarking on different versions of the
 code?
 Automating this will be very helpful.





Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Jeffrey Sarnoff
If correct rounding is a goal:

For almost all Float64 arguments to elementary functions, working with 
120-bit significands will almost always assure an accurately rounded result, 
and working with a 168-bit significand obtains a correctly rounded Float64 
value all the time (at least for the functions I have seen analyzed).  An 
accurately rounded result is obtainable working with less precision much of 
the time, say 80 bits of significand (just a guess); rarely will the 
required precision be that of the input.

see the papers of Vincent Lefevre e.g.
  "Worst Cases for Rounding Elementary Functions in Double Precision" 
http://perso.ens-lyon.fr/jean-michel.muller/TMDworstcases.pdf
  "Worst Cases for the Exponential Function in decimal64" 
http://perso.ens-lyon.fr/damien.stehle/downloads/decimalexp.pdf




On Thursday, July 30, 2015 at 11:08:50 AM UTC-4, Tom Breloff wrote:

 It's better in the sense that you have a reason to try it with a larger 
 type.  You know exactly how much precision you've lost, and so you can 
 decide to use up to 1024 bits for intermediate calculations if you need 
 to.  If sqrt(2) is part of your calculation, the inexact field for floats 
 will be set no matter the calculation, and you only know that "my answer is 
 always wrong".  I wouldn't exactly call this a useful/actionable statement.

 On Thu, Jul 30, 2015 at 10:58 AM, Stefan Karpinski ste...@karpinski.org 
 javascript: wrote:

 This doesn't seem any better than try the computation with Float128s.

 On Thu, Jul 30, 2015 at 10:27 AM, Tom Breloff t...@breloff.com 
 javascript: wrote:

 Steven: There is a section in the book dedicated to writing dynamically 
 scaling precision/accuracy into your algorithms.  The idea is this:

 - Pick a small format unum at the start of your algorithm.
 - During the algorithm, check your unums for insufficient 
 precision/accuracy in the final interval.
 - As soon as you discover the intervals getting too large, restart with 
 a new unum environment.

 Obviously this type of resetting shouldn't be default behavior, but the 
 point is that you have as much flexibility as you need to precisely define 
 the level of detail that you care about, and there is sufficient 
 information in your result that you can re-run with better settings if the 
 result is unsatisfactory.  

 The problem with floats is that you can get the result of a black-box 
 calculation and have NO IDEA how wrong you are... only that your solution 
 is not exact.  This concept should make you skeptical of every float 
 calculation that results with the inexact flag being set.


 On Thu, Jul 30, 2015 at 10:07 AM, Steven G. Johnson steve...@gmail.com 
 javascript: wrote:

 On Wednesday, July 29, 2015 at 5:47:50 PM UTC-4, Job van der Zwan wrote:

 On Thursday, 30 July 2015 00:00:56 UTC+3, Steven G. Johnson wrote:

 Job, I'm basing my judgement on the presentation.


  Ah ok, I was wondering. I feel like those presentations give a general 
 impression, but don't really explain the details enough. And like I said, 
 your critique overlaps with Gustafson's own critique of traditional 
 interval arithmetic, so I wasn't sure if you meant that you don't buy his 
 suggested alternative ubox method after reading the book, or indicated 
 scepticism based on earlier experience, but without full knowledge of 
 what 
 his suggested alternative is.


 From the presentation, it seemed pretty explicit that the ubox method 
 replaces a single interval or pair of intervals with a rapidly expanding 
 set of boxes.  I just don't see any conceivable way that this could be 
 practical for large-scale problems involving many variables.
  

 Well.. we give up one bit of *precision* in the fraction, but *our 
 set of representations is still the same size*. We still have the 
 same number of floats as before! It's just that half of them is now exact 
 (with one bit less precision), and the other half represents open 
 intervals 
 between these exact numbers. Which lets you represent the entire real 
 number line accurately (but with limited precision, unless they happen to 
 be equal to an exact float). 


 Sorry, but that just does not and cannot work.

 The problem is that if you interpret an exact unum as the open interval 
 between two adjacent exact values, what you have is essentially the same 
 as 
 interval arithmetic.  The result of each operation will produce intervals 
 that are broader and broader (necessitating lower and lower precision 
 unums), with the well known problem that the interval quickly becomes 
 absurdly pessimistic in real problems (i.e. you quickly and prematurely 
 discard all of your precision in a variable-precision format like unums).

 The real problem with interval arithmetic is not open vs. closed 
 intervals, it is this growth of the error bounds in realistic computations 
 (due to the dependency problem and similar).  (The focus on infinite and 
 semi-infinite open intervals is a sideshow.  If you want useful error 
 bounds, 

[julia-users] Re: ANN: Testing specific Julia versions on Travis CI

2015-07-30 Thread Michael Prentiss
That is great news.   Well done. 


[julia-users] Re: John L. Gustafson's UNUMs

2015-07-30 Thread Jeffrey Sarnoff
+1 for grain of salt

On Saturday, July 25, 2015 at 9:11:54 AM UTC-4, Job van der Zwan wrote:

 So I came across the concept of UNUMs on the Pony language mailing list 
 http://lists.ponylang.org/pipermail/ponydev/2015-July/71.html this 
 morning. I hadn't heard of them before, and a quick search doesn't show up 
 anything on this mailing list, so I guess most people here haven't either. 
 They're a proposed alternate encoding for numbers by John L. Gustafson. 
 This presentation by him sums it up nicely:

 http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPrecision1.pdf

  “Unums” (universal numbers) are to floating point what floating point is to 
 fixed point.
 Floating-point values self-describe their scale factor, but fix the 
 exponent and fraction size. Unums self-describe the exponent size, fraction 
 size, and inexact state, and include fixed point and IEEE floats as special 
 cases.


 The presentation can be seen here, provided you have the Silverlight 
 plugin:


 http://sites.ieee.org/scv-cs/archives/right-sizing-precision-to-save-energy-power-and-storage

 Now, I don't know enough about this topic to say if they're a good or bad 
 idea, but I figured the idea is interesting/relevant enough to share with 
 the Julia crowd.

 I'm also wondering if they could be implemented (relatively) easily within 
 Julia, given its flexible type system. If so, they might provide an 
 interesting advanced example, no?



Re: [julia-users] Re: C function vs 64 bit arithmetic in julia

2015-07-30 Thread Stefan Karpinski
If you put the code in a function and don't do anything that makes types
unpredictable, you will get the exact same code you would in C.
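
For instance, a minimal sketch of the whole round trip in a function (0.4
syntax; the byte-chewing step is left as a placeholder, and the reassembly
order is one assumption among several possible):

function chew(x::UInt64)
    bites = Array(UInt8, 8)
    for i = 1:8
        bites[i] = x & 0xff   # take the low byte
        x >>= 8
    end
    # ... do stuff to bites here ...
    y = zero(UInt64)
    for i = 8:-1:1
        y = (y << 8) | bites[i]   # reassemble, low byte ends up low again
    end
    return y
end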

On Thu, Jul 30, 2015 at 2:05 PM, Jeffrey Sarnoff jeffrey.sarn...@gmail.com
wrote:

 It has been my experience that, with an appropriate choice of data
 structure and straightforward lines of code, Julia is better.
  The Julia realization will be fast enough: for the operations you need,
  within 2x-3x of C once the loop executes, and it is much less
  hassle and easier to maintain.  There are ways to do it wrong and incur
  unneeded overhead.
 I defer to others to give you specific guidance.


 On Thursday, July 30, 2015 at 1:40:34 PM UTC-4, Forrest Curo wrote:

  I want to turn an unsigned64 into bytes, chew on the bytes, & rearrange
 into a new unsigned64.

 Should I expect significant gain by reading it into a C function to make
  it a union of char and unsigned64, take out the chars & put the new ones
 back into that union --

 or should it be close enough in speed to stay in julia,
 with something like:

 for i = 1:8
   bites[i] = x & 255
   x >>= 8
 end

 [doing stuff to bites]

 x = 0
 for i = 1:8
  x += bites[i]
 end
 ?




Re: [julia-users] ANN: StructsOfArrays.jl

2015-07-30 Thread Matt Bauman
I absolutely love that it's only 43 lines of code!

On Thursday, July 30, 2015 at 2:14:50 PM UTC-4, Stefan Karpinski wrote:

 The ease with which you were able to put that together is pretty amazing.

 On Thu, Jul 30, 2015 at 1:38 PM, Simon Kornblith si...@simonster.com 
 javascript: wrote:

 Yichao, Oscar, and I were unhappy with the current state of vectorization 
 of operations involving complex numbers and other immutables so I decided 
 to do something about it. I'm pleased to announce StructsOfArrays.jl 
 https://github.com/simonster/StructsOfArrays.jl, which performs the 
  Array of Structures -> Structure of Arrays memory layout optimization 
 without requiring code changes. This alternative memory layout permits SIMD 
 optimizations for immutables for which such optimizations would not 
 otherwise be possible or profitable, either because of limitations of the 
 Julia codegen and LLVM optimizer or because of the type of the operations 
 performed. The benchmark in the README shows that StructsOfArrays can give 
 non-negligible speedups for simple operations involving arrays of complex 
 numbers.

 Simon




Re: [julia-users] Re: C function vs 64 bit arithmetic in julia

2015-07-30 Thread Tim Holy
...with the possible added bonus that it might be inlined, in which case pure-
julia will likely be faster than calling a C library.

--Tim

On Thursday, July 30, 2015 02:18:16 PM Stefan Karpinski wrote:
 If you put the code in a function and don't do anything that makes types
 unpredictable, you will get the exact same code you would in C.
 
 On Thu, Jul 30, 2015 at 2:05 PM, Jeffrey Sarnoff jeffrey.sarn...@gmail.com
 wrote:
  It has been my experience that, with an appropriate choice of data
  structure and straightforward lines of code, Julia is better.
   The Julia realization will be fast enough: for the operations you need,
   within 2x-3x of C once the loop executes, and it is much less
   hassle and easier to maintain.  There are ways to do it wrong and incur
   unneeded overhead.
  I defer to others to give you specific guidance.
  
  On Thursday, July 30, 2015 at 1:40:34 PM UTC-4, Forrest Curo wrote:
   I want to turn an unsigned64 into bytes, chew on the bytes, & rearrange
  into a new unsigned64.
  
  Should I expect significant gain by reading it into a C function to make
   it a union of char and unsigned64, take out the chars & put the new ones
  back into that union --
  
  or should it be close enough in speed to stay in julia,
  with something like:
  
  for i = 1:8
  
    bites[i] = x & 255
    x >>= 8
  
  end
  
  [doing stuff to bites]
  
  x = 0
  for i = 1:8
  
   x += bites[i]
  
  end
  ?



[julia-users] Re: Errors while trying to use cxx and embed Julia together

2015-07-30 Thread Kostas Tavlaridis-Gyparakis
Hello again,
After following the above mentioned instructions of Jeff managed to make 
Cxx working properly in the julia version
installed via (make install), yet again the original problem persists.
Just to remind you:

I have the following c++ class:

1) cpp.file:
#include "ArrayMaker.h"
#include <iostream>

using namespace std;


ArrayMaker::ArrayMaker() {
// TODO Auto-generated constructor stub

}

ArrayMaker::~ArrayMaker() {
// TODO Auto-generated destructor stub
}


double ArrayMaker::ArrayMak(int iNum, float fNum) {

jl_init(JULIA_INIT_DIR);

   jl_load("/home/kostav/.julia/v0.4/loleee/src/loleee.jl");
   jl_value_t * mod = (jl_value_t*)jl_eval_string("loleee");
   jl_function_t * func = 
jl_get_function((jl_module_t*)mod, "funior");


jl_value_t * argument = jl_box_float64(2.0);
jl_value_t * argument2 = jl_box_float64(3.0);
jl_value_t * ret = jl_call2(func, argument, argument2);

sol =  jl_unbox_float64(ret);

jl_atexit_hook();
return sol;
}
}

2) header file:
#ifndef ARRAYMAKER_H
#define ARRAYMAKER_H

#include <julia.h>

class ArrayMaker
{
public:
ArrayMaker();
virtual ~ArrayMaker();
double ArrayMak(int, float);
};
#endif

Which are called isnide this small julia file:

using Cxx

# Importing shared library and header file
const path_to_lib = "/home/kostav/Documents/Project/julia/cle"

addHeaderDir(path_to_lib, kind=C_System)
Libdl.dlopen(path_to_lib * "/test.so", Libdl.RTLD_GLOBAL)

cxxinclude("ArrayMaker.h")
maker = @cxxnew ArrayMaker()

a = @cxx maker->ArrayMak(5, 2.0)
println("return value is: ", a)


As described above, what I am trying to do is call the C++ function ArrayMak 
from the julia file,
where inside this function I call another julia function (called 
funior) that just receives
two numbers as arguments and returns their sum.
More or less I want to verify that I can have the following process:
Julia file -> calling C++ classes -> calling inside them other julia 
functions -> returning arguments from the C++ classes to the original julia 
file.
So when I run the julia file everything works until the moment that the C++ 
function needs to return the argument to the julia file. I have checked
and confirmed that up to that point everything works properly:
1) C++ function ArrayMak is called
2) Function funior receives the arguments properly and returns the sum of 
the two arguments to the variable sol
But when the value of sol needs to be returned to the julia function, 
everything gets stuck.
Judging from the example on the Cxx website, in order to return arguments 
from C++ functions to julia you don't need to do anything complicated
or different from what is shown here:


arr = @cxx maker->fillArr() (where fillArr is supposed to return an array)

So, I am not sure what is going wrong here...




On Wednesday, July 22, 2015 at 9:33:00 AM UTC+2, Kostas 
Tavlaridis-Gyparakis wrote:

 Any help/further suggestions please?

 On Tuesday, July 21, 2015 at 3:27:10 PM UTC+2, Kostas Tavlaridis-Gyparakis 
 wrote:

 Hello again,
 Unfortunately, and let me apologize in advance, I still find myself 
 confused about how to make things 
 work properly.

 Here are the symbolic links.  I believe with some simple changes to Cxx, 
 the need for these
 will disappear, but for now there's where it searches.

 /usr/local/julia/lib

 bizarro% ls -l

 total 16

 drwxr-xr-x   4 jeffw  staff   136 Jul 18 18:26 clang

 lrwxr-xr-x   1 root   staff  10 Jul 19 03:16 include -> ../include

 drwxr-xr-x  49 jeffw  staff  1666 Jul 18 20:11 julia

 lrwxr-xr-x   1 root   staff 1 Jul 19 03:16 lib -> .


 If I get this right does it mean that for all the files that are included 
 in the list   * DirectoryLayout* 
 https://gist.github.com/waTeim/ec622a0630f220e6b3c3#file-directorylayout
 
 I need to create a soft link to link them with the directory 
 /usr/local/julia/lib (which doesn't exist
 in my system)?
 Also regarding the instructions of part *BuildJuliaLLVMforCXX* 
 https://gist.github.com/waTeim/ec622a0630f220e6b3c3#file-buildjuliallvmforcxx
  
 where you present the files that
 need to be cp to the directory julia-f428392003 in my system I face the 
 following problems:

 1) The image files (sys.ji, sys.dylib,sys-debug.dylib) don't exist 
 anywhere in my system
  2) The folder llvm-svn (in the source julia directory) has a completely 
  different structure in my system.
  In my julia (source code) folder there are two folders with the name 
  llvm-svn: one has the structure
  shown in the attached picture (called llvmsn), while the 
  second one is located at the
  path /home/kostav/julia/usr-staging/llvm-svn and has a single folder 
  named build_Release+Asserts.

 All in all I am confused about how to follow your instructions (for which I 
 really do appreciate the effort you
 put in to provide them). Is what I have to do:

 1) Try to copy paste the folders as presented in *BuildJuliaLLVMforCXX* 
 

Re: [julia-users] Re: John L. Gustafson's UNUMs

2015-07-30 Thread Tom Breloff
How about moving the discussion here:
https://github.com/tbreloff/Unums.jl/issues/2

On Thu, Jul 30, 2015 at 1:09 PM, Jeffrey Sarnoff jeffrey.sarn...@gmail.com
wrote:

 +1 for grain of salt


 On Saturday, July 25, 2015 at 9:11:54 AM UTC-4, Job van der Zwan wrote:

 So I came across the concept of UNUMs on the Pony language mailing list
 http://lists.ponylang.org/pipermail/ponydev/2015-July/71.html this
 morning. I hadn't heard of them before, and a quick search doesn't show up
 anything on this mailing list, so I guess most people here haven't either.
 They're a proposed alternate encoding for numbers by John L. Gustafson.
 This presentation by him sums it up nicely:

 http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPrecision1.pdf

  “Unums” (universal numbers) are to floating point what floating point is
 to fixed point.
 Floating-point values self-describe their scale factor, but fix the
 exponent and fraction size. Unums self-describe the exponent size, fraction
 size, and inexact state, and include fixed point and IEEE floats as special
 cases.


 The presentation can be seen here, provided you have the Silverlight
 plugin:


 http://sites.ieee.org/scv-cs/archives/right-sizing-precision-to-save-energy-power-and-storage

 Now, I don't know enough about this topic to say if they're a good or bad
 idea, but I figured the idea is interesting/relevant enough to share with
 the Julia crowd.

 I'm also wondering if they could be implemented (relatively) easily
 within Julia, given its flexible type system. If so, they might provide an
 interesting advanced example, no?




[julia-users] Serializing custom types

2015-07-30 Thread Marc Gallant
Given the following types:

type Bar
x::Float64
y::Int
end


type Foo
x::Vector{Float64}
y::Bar
z::Matrix{Float64}
end


and the following vector:

a = [Foo([1.1, 2.2], Bar(1.1, 4), rand(2, 2)), Foo([1.3, 2.4], Bar(-1.1, 
2), rand(2, 2))]


Do you have any suggestions on how I would go about serializing a? I have an 
analogous situation where I do a lot of number crunching resulting in 
something that resembles a, but has thousands of entries. I'd like to store 
a in a file and read it in later. I've looked into the HDF5 package but I'm 
having a hard time figuring out a nice way to disassemble my custom types 
so they can be written, and then reconstructing them after the data is 
retrieved.

I'm using Julia 0.3.10.

Thanks!





Re: [julia-users] Serializing custom types

2015-07-30 Thread Tim Holy
Try the JLD package?

--Tim

On Thursday, July 30, 2015 08:57:23 AM Marc Gallant wrote:
 Given the following types:
 
 type Bar
 x::Float64
 y::Int
 end
 
 
 type Foo
 x::Vector{Float64}
 y::Bar
 z::Matrix{Float64}
 end
 
 
 and the following vector:
 
 a = [Foo([1.1, 2.2], Bar(1.1, 4), rand(2, 2)), Foo([1.3, 2.4], Bar(-1.1,
 2), rand(2, 2))]
 
 
  Do you have any suggestions on how I would go about serializing a? I have an
  analogous situation where I do a lot of number crunching resulting in
 something that resembles a, but has thousands of entries. I'd like to store
 a in a file and read it in later. I've looked into the HDF5 package but I'm
 having a hard time figuring out a nice way to disassemble my custom types
 so they can be written, and then reconstructing them after the data is
 retrieved.
 
 I'm using Julia 0.3.10.
 
 Thanks!
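
A minimal sketch of the JLD route for the example above (file name invented;
JLD stores and restores user-defined types like Foo and Bar, provided their
definitions are loaded before reading):

using HDF5, JLD

# write: stores `a` together with its type information under the name "a"
@save "results.jld" a

# ... later, in a session where Foo and Bar are already defined ...
@load "results.jld" a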



[julia-users] Re: ANN: Testing specific Julia versions on Travis CI

2015-07-30 Thread Michael Prentiss
This is great progress. 

Similarly, is there a way for benchmarking on different versions of the 
code?
Automating this will be very helpful.




Re: [julia-users] Re: John L. Gustafson's UNUMs

2015-07-30 Thread Jason Merrill
On Wednesday, July 29, 2015 at 5:31:12 PM UTC-4, Stefan Karpinski wrote:

 The most compelling part of the proposal to me was the claim of 
 associativity, which I suppose comes along with the variable precision 
 since you can actually drop trailing bits that you can't get right.


I bought a copy of the book, because I'm a sucker for this kind of thing. 
There's a lot of fascinating material in the book, and I would generally 
recommend it, with the caveat that it seems like some of it needs to be 
taken with a grain of salt. Remember that it's a manuscript that hasn't 
gone through the kind of peer review that journal articles do.

Associativity sounded pretty exciting to me, too, but you have to do 
special work to get it. If a, b, and c are unums or ubounds, it is *not* 
the case that you will always have (a+b)+c=a+(b+c), if you write the 
calculation that way. Like other kinds of interval arithmetic, ubounds obey 
sub-associativity, which says that the two sides of that equation need not 
be equal, but that their intersection contains the exact answer.

The way you get associativity is by using a fused sum operation that 
internally accumulates sums with enough precision to restore associativity 
of the rounded end results. Here's a page from the book:

https://books.google.com/books?id=fZsXBgAAQBAJ&pg=PA164

Gustafson's whole proposal involves standardizing several layers of 
computation, including what happens in a higher precision scratchpad that 
is imagined to be in hardware and fenced off from the user. IEEE floating 
point arithmetic also works with a higher precision scratchpad, but exactly 
what happens there is a little bit underconstrained, and has varied 
between different processors. Standardizing the scratchpad, and which fused 
operations will keep their operands there, seems to be an important part of 
the proposal.
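
(For contrast, the familiar Float64 failure that the fused operations and
standardized scratchpad are meant to repair:)

julia> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
false

julia> ((0.1 + 0.2) + 0.3) - (0.1 + (0.2 + 0.3))
1.1102230246251565e-16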

I'm pretty interested in more discussion of the book, but this mailing list 
probably isn't the right place for a wide ranging discussion to happen. 
Does anyone have any advice about other appropriate forums?


Re: [julia-users] John L. Gustafson's UNUMs

2015-07-30 Thread Steven G. Johnson


On Thursday, July 30, 2015 at 11:08:50 AM UTC-4, Tom Breloff wrote:

 It's better in the sense that you have a reason to try it with a larger 
 type.  You know exactly how much precision you've lost, and so you can 
 decide to use up to 1024 bits for intermediate calculations if you need to. 


No, it's worse, because you will likely use much more precision than you 
need.  You don't know exactly how much precision you've lost, you have a 
(probably) grossly pessimistic estimate of how much precision you've lost.

Compared to that, performing the calculation in Float32, then Float64, then 
(rarely) Float128 (or better, rearrange your calculation to avoid the 
catastrophic loss of accuracy that is necessitating > double precision) 
until the answer stops changing to your desired tolerance is vastly more 
efficient.


[julia-users] ANN: StructsOfArrays.jl

2015-07-30 Thread Simon Kornblith
Yichao, Oscar, and I were unhappy with the current state of vectorization 
of operations involving complex numbers and other immutables so I decided 
to do something about it. I'm pleased to announce StructsOfArrays.jl 
https://github.com/simonster/StructsOfArrays.jl, which performs the Array 
of Structures - Structure of Arrays memory layout optimization without 
requiring code changes. This alternative memory layout permits SIMD 
optimizations for immutables for which such optimizations would not 
otherwise be possible or profitable, either because of limitations of the 
Julia codegen and LLVM optimizer or because of the type of the operations 
performed. The benchmark in the README shows that StructsOfArrays can give 
non-negligible speedups for simple operations involving arrays of complex 
numbers.

Simon
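
For readers unfamiliar with the transformation, a hand-rolled illustration
of what the AoS -> SoA layout change means (just the idea, not the
package's API):

# array-of-structures: memory holds re,im,re,im,... interleaved
aos = [complex(rand(), rand()) for i in 1:1000]

# structure-of-arrays: two contiguous Float64 vectors, SIMD-friendly
immutable ComplexSoA
    re::Vector{Float64}
    im::Vector{Float64}
end

soa = ComplexSoA(Float64[real(z) for z in aos],
                 Float64[imag(z) for z in aos])

# scaling by a real touches each component array in a stride-1 loop
function scale_soa!(s::ComplexSoA, a::Float64)
    for i in 1:length(s.re)
        @inbounds s.re[i] *= a
        @inbounds s.im[i] *= a
    end
    return s
end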


[julia-users] C function vs 64 bit arithmetic in julia

2015-07-30 Thread Forrest Curo
I want to turn an unsigned64 into bytes, chew on the bytes, & rearrange into
a new unsigned64.

Should I expect significant gain by reading it into a C function to make it
a union of char and unsigned64, take out the chars & put the new ones back
into that union --

or should it be close enough in speed to stay in julia,
with something like:

for i = 1:8
 bites[i] = x & 255
 x >>= 8
end

[doing stuff to bites]

x = 0
for i = 1:8
 x += bites[i]
end
?


[julia-users] Re: C function vs 64 bit arithmetic in julia

2015-07-30 Thread Jeffrey Sarnoff
It has been my experience that, with an appropriate choice of data 
structure and straightforward lines of code, Julia is better.
The Julia realization will be fast enough: for the operations you need, 
within 2x-3x of C once the loop executes, and it is much less
hassle and easier to maintain.  There are ways to do it wrong and incur 
unneeded overhead.  
I defer to others to give you specific guidance.

On Thursday, July 30, 2015 at 1:40:34 PM UTC-4, Forrest Curo wrote:

  I want to turn an unsigned64 into bytes, chew on the bytes, & rearrange 
 into a new unsigned64.

 Should I expect significant gain by reading it into a C function to make 
  it a union of char and unsigned64, take out the chars & put the new ones 
 back into that union -- 

 or should it be close enough in speed to stay in julia,  
 with something like:

 for i = 1:8
   bites[i] = x & 255
   x >>= 8
 end

 [doing stuff to bites]

 x = 0
 for i = 1:8
  x += bites[i]
 end
 ?



Re: [julia-users] ANN: StructsOfArrays.jl

2015-07-30 Thread Stefan Karpinski
The ease with which you were able to put that together is pretty amazing.

On Thu, Jul 30, 2015 at 1:38 PM, Simon Kornblith si...@simonster.com
wrote:

 Yichao, Oscar, and I were unhappy with the current state of vectorization
 of operations involving complex numbers and other immutables so I decided
 to do something about it. I'm pleased to announce StructsOfArrays.jl
 https://github.com/simonster/StructsOfArrays.jl, which performs the
  Array of Structures -> Structure of Arrays memory layout optimization
 without requiring code changes. This alternative memory layout permits SIMD
 optimizations for immutables for which such optimizations would not
 otherwise be possible or profitable, either because of limitations of the
 Julia codegen and LLVM optimizer or because of the type of the operations
 performed. The benchmark in the README shows that StructsOfArrays can give
 non-negligible speedups for simple operations involving arrays of complex
 numbers.

 Simon



Re: [julia-users] In what version is Julia supposed to mature?

2015-07-30 Thread Eric Forgy
Great news! :)

On Wednesday, July 29, 2015 at 9:52:36 AM UTC+8, Stefan Karpinski wrote:

 That's literally the only part of that post that I would change :-)

 But no, I'm not trolling, 1.0 should be out next year. Predicting down to 
 the month – or even quarter – is hard, but that's what I think we're 
 looking at. I'll post a 1.0 roadmap issue soon.



[julia-users] Re: parallel threads broken, replacing module

2015-07-30 Thread lapeyre . math122a
Thanks Seth.

On Thursday, July 30, 2015 at 2:46:42 AM UTC+2, Seth wrote:

 Reference: https://github.com/JuliaLang/julia/issues/12381

 On Wednesday, July 29, 2015 at 5:35:14 PM UTC-7, Seth wrote:

 For what it's worth, I'm seeing the same thing:

 julia @everywhere using LightGraphs
 WARNING: replacing module LightGraphs
 WARNING: replacing module LightGraphs
 WARNING: replacing module LightGraphs
 WARNING: replacing module LightGraphs
 exception on 4:
 ...
 (Lots of error messages / backtraces removed)
 ...
 ERROR: ProcessExitedException()
  in wait at /usr/local/julia-latest/lib/julia/sys.dylib
  in sync_end at /usr/local/julia-latest/lib/julia/sys.dylib
  in anonymous at multi.jl:348



 On Wednesday, July 29, 2015 at 4:54:30 PM UTC-7, lapeyre@gmail.com 
 wrote:

 I can file a bug report. But, I'm not entirely sure what to write.

 Simpler case:

  julia -p 4
  @everywhere using ZChop (simple modules)

 The following version is from a completely fresh build today. I don't 
 get a crash immediately, but I do get
 WARNING: replacing module ZChop (4 times)
 Version 0.4.0-dev+6394 (2015-07-29 21:58 UTC)
 Commit a2a218b* (0 days old master)

 Five days ago, I did git pull and built from an older clone
 Version 0.4.0-dev+6202 (2015-07-24 01:47 UTC)
 Commit 53a7f2e* (5 days old master)
 This version prints the same warning and spews many errors including
 unknown function (ip: 0x7fe94d596ff7)
 ERROR (unhandled task failure): EOFError: read end of file

 The following version will load ZChop with no problem, but when loading 
 Distributions, it still prints many errors
 and fails.  It loads some other modules (i.e with @everywhere, as above),
 such as Roots with no apparent errors.
 Version 0.4.0-dev+3965 (2015-03-22 12:24 UTC)
 Commit e1f0310* (129 days old master)

 A fresh build today on another machine gives errors with @everywhere 
 using Roots.
 It fails to load Distributions even with no parallel threads, but this 
 appears to be unrelated.

 On the same (other) machine, the following version loads and runs 
 parallel code using the modules
 mentioned above, and more.
 Version 0.4.0-dev+4096 (2015-03-31 08:05 UTC)
 Commit a3c0743* (120 days old master)

 I see no error with 0.3.5-pre+121 (2015-01-07 00:19 UTC)


 On Wednesday, July 29, 2015 at 11:24:55 PM UTC+2, lapeyre@gmail.com 
 wrote:

 Parallel threads have stopped working for me. Any ideas?
 Code using addprocs and @everywhere include has stopped working (is 
 broken) on one machine. Code that used to work now causes a number of 
 varying errors to be printed, crashes, runaway processes, etc. when it is 
 loaded. Both a recent v0.4 and a 2 month old version that did run the code 
 in the past cause the failure. The same code on a different machine 
 continues to run normally. Maybe my user Julia environment has a problem ? 
 Or maybe I upgraded a system library ?

 If I use just 2 processes, the code will sometimes  both load and run 
 correctly.

 A minimal example

 test.jl
 --
 using Distributions
 --

 julia -p 4
  @everywhere include(test.jl)

 WARNING: replacing module Distributions   ( 4 times )
 signal (11): Segmentation fault
 unknown function (ip: 0x7f235c7ffd98)
 jl_module_import at /usr/local/julia-0.4a/bin/../lib/julia/libjulia.so 
 (unknown line)

 etc.

 I am running unstable debian linux.

 Thanks, John