Re: matrix library

2016-05-24 Thread Vlad Levenfeld via Digitalmars-d-announce

On Tuesday, 24 May 2016 at 05:52:03 UTC, Edwin van Leeuwen wrote:
You might be interested in joining the gitter channel where the 
mir developers hang out:

https://gitter.im/libmir/public


Thanks!


Re: matrix library

2016-05-23 Thread Vlad Levenfeld via Digitalmars-d-announce

On Monday, 23 May 2016 at 20:56:54 UTC, Edwin van Leeuwen wrote:
There is also mir, which is working towards being a full 
replacement for blas:

https://github.com/libmir/mir

It is still under development, but I think the goal is to 
become the ultimate matrix library :)


I am sorely tempted to use mir's MatrixView as the backend for 
the matrix slicing, but I don't know what else I might need from 
cblas, so maybe this will come later (once I figure out, or 
someone explains, what the proper resource/reference handling 
should be, especially in the case of small matrices backed by 
static arrays).


Now I am thinking that the best way to orthogonalize (sorry) my 
efforts with respect to mir and scid.linalg is to use them as 
backend drivers, maintain this wrapper for the crowd that isn't 
as familiar with BLAS/LAPACK or wants to write slightly more 
concise top-level code, and forward the relevant bug reports and 
pull requests to mir and scid.


Re: matrix library

2016-05-23 Thread Vlad Levenfeld via Digitalmars-d-announce

On Monday, 23 May 2016 at 20:11:22 UTC, Vlad Levenfeld wrote:

...


At first glance it looks like 
https://github.com/DlangScience/scid/blob/master/source/scid/matrix.d 
has most of what my matrix implementation is missing. I'm not 
sure how to put them together yet.


Re: matrix library

2016-05-23 Thread Vlad Levenfeld via Digitalmars-d-announce

On Monday, 23 May 2016 at 18:10:40 UTC, Carl Vogel wrote:
How does what you're doing compare to what's in 
https://github.com/DlangScience/scid/blob/master/source/scid/linalg.d ?


Basically, I have made a matrix structure and wrapped some basic 
arithmetic, while scid.linalg provides functions wrapping some 
heavier tasks (inversion, determinant, etc.).
There appears to be no functional overlap, so I think what I 
will do is contribute any actual linalg routines I write back to 
scid.linalg, and then pull it in as a dependency of this package.
The stuff in my lib right now is far less careful with respect 
to resources than scid.linalg and wouldn't yet serve as a 
general-purpose matrix wrapper. Maybe after enough iterations it 
will converge on something reasonably performant.


On Monday, 23 May 2016 at 18:10:40 UTC, Carl Vogel wrote:
Making the dims template/compile-time params is an interesting 
choice, but I wonder if it is unduly limiting.


Yeah, you're right. I mentioned dynamic-array-backed matrices 
as an efficiency measure in my first post, but they would be a 
good way to solve this problem as well.


matrix library

2016-05-23 Thread Vlad Levenfeld via Digitalmars-d-announce

https://github.com/evenex/linalg

I've heard some people (including me) asking about matrix 
libraries for D, and while there is gl3n, it only goes up to 
4x4 matrices and was written before all the multidimensional 
indexing stuff existed.


So I was using gl3n for a while until I needed some 6x6s, and 
threw together a syntax-sugary sort of wrapper over 
std.experimental.ndslice and cblas for matrix math.


You can slice submatrices, assign to them, and perform ops on 
them with other matrices or 2-dimensional array slices... though, 
for implementation-ish reasons, ops involving 2-d arrays are 
elementwise (you'll have to call the Matrix constructor to use 
matrix multiplication again).
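
To give a feel for it, a rough usage sketch (the names, 
dimensions, and operators here are illustrative assumptions, 
not copied from the actual API):

  // hypothetical 6x6 and 2x2 matrices backed by static arrays
  auto A = Matrix!(double, 6, 6)();
  auto B = Matrix!(double, 2, 2)();

  A[0..2, 0..2] = B;             // assign into a submatrix slice
  auto C = A[0..2, 0..2] * B;    // matrix * matrix: matrix product

  double[2][2] arr;
  auto D = A[0..2, 0..2] * arr;  // matrix * 2-d array: elementwise
  auto E = A[0..2, 0..2] * Matrix!(double, 2, 2)(arr); // product again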


It was built in kind of an ad hoc way and I will be adding 
stuff to it as the need arises, so there's nothing there yet 
beyond the bare basics, and you should expect bugs. All the 
matrices hold static arrays because I don't want to mess with 
reference problems right now. A matrix past a certain size 
would be more efficient to store as a dynamic array, of course. 
But, right now, I need this to make writing linear algebra code 
comfortable for myself rather than to win at benchmarks.
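
For the curious, the storage scheme amounts to something like 
this sketch (field names and indexing are illustrative, not the 
lib's actual code):

  struct Matrix(T, size_t rows, size_t cols)
  {
      // static array backing: value semantics, no GC allocation,
      // and no reference-handling questions for small matrices
      T[rows * cols] data;

      // row-major element access
      ref T opIndex(size_t i, size_t j)
      {
          return data[i * cols + j];
      }
  }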


Bug reports, pull requests, and feature requests welcome.



Re: futures and related asynchronous combinators

2016-03-27 Thread Vlad Levenfeld via Digitalmars-d-announce

On Sunday, 27 March 2016 at 15:10:46 UTC, maik klein wrote:

On Sunday, 27 March 2016 at 07:16:53 UTC, Vlad Levenfeld wrote:

https://github.com/evenex/future/

I've been having to do a lot of complicated async work lately 
(sometimes multithreaded, sometimes not), and I decided to 
abstract some patterns out and unify them with a little bit 
of formalism borrowed from functional languages. I've aimed to 
keep things as simple as possible while providing a full 
spread of functionality. This has worked well for me under a 
variety of use-cases, but YMMV of course.


[...]


What happens when you spawn a future inside a future and call 
await? Will the 'outer' future be rescheduled?


I think that you are asking about what happens if you call 
"async" from within another "async" call. If that's the case:


Short answer:

No rescheduling by default. The outer future is ready as soon as 
the inner future has been spawned. You would still have to await 
the inner future separately. If you are chaining "async" calls 
you will want to use "next" and/or "sync" to get the rescheduling 
behavior you want.


Long answer:

It depends on how you spawn the future.

A future is a passive thing. All it does, by itself, is signify a 
value yet to be computed. There is no constraint on how the 
future is to be fulfilled. You could create a future manually 
with "pending", in which case calling "await" on it would block 
forever (because the future never has "fulfill" called on it). 
What a function like "async!f" does is to create a future with an 
implied promise to fulfill that future from another thread.
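
As a concrete sketch of the manual route (the names pending, 
fulfill, await, and result are the ones used in this thread; 
treat the exact signatures as approximate):

  auto f = pending!int;           // an unfulfilled Future!int
  f.fulfill(42);                  // fulfill it from anywhere
  f.await;                        // returns immediately, since f
                                  // is already fulfilled
  assert(f.result.success == 42); // the computed value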


Let's suppose "f", internally, calls "async" to return a 
Future!A. Then "async!f" spawns a Future!(Future!A). The outer 
future is ready once "f" returns, and the inner future will be 
ready once the async operation launched by "f" completes. To 
await the final result, you might call "await.result.await", 
which quickly becomes awkward. To avoid this awkwardness, you 
should use "sync" to flatten nested futures, or "next" to 
automatically flatten the futures as you chain them.


For example:

  async!f : Future!(Future!A)
  async!f.sync : Future!A
  async!({}).next!f : Future!A

If you're familiar with functional design patterns, Future is a 
monad with "next" as bind, "sync" as join, and 
"pending!A.fulfill(a)" as return.


If not, just remember: "sync" removes a layer of nesting, and 
"next" chains future calls without nesting them.


Hope that helped!


Re: futures and related asynchronous combinators

2016-03-27 Thread Vlad Levenfeld via Digitalmars-d-announce

On Sunday, 27 March 2016 at 08:16:22 UTC, Eugene Wissner wrote:

On Sunday, 27 March 2016 at 07:16:53 UTC, Vlad Levenfeld wrote:

https://github.com/evenex/future/

I've been having to do a lot of complicated async work lately 
(sometimes multithreaded, sometimes not), and I decided to 
abstract some patterns out and unify them with a little bit 
of formalism borrowed from functional languages. I've aimed to 
keep things as simple as possible while providing a full 
spread of functionality. This has worked well for me under a 
variety of use-cases, but YMMV of course.


Anyway I've occasionally seen people on IRC asking about 
futures, so I thought I'd share and make this announcement.


This lib depends on another lib of mine (for tagged unions and 
related things), which might not appeal to some, but if there is 
demand for futures sans dependencies I can always go back and 
manually inline some of the templates.


TL;DR:

  auto x = async!((y,z) => y + z)(1,2);
  x.await;
  assert(x.result.success == 3);


Hi Vlad,

Do you intend to open-source other parts of your work?
Can I ask what you are using for your async stuff: libasync, 
vibe, asynchronous, or something self-written?


For things like timers, event loops and networking in my current 
project I am using libasync.


As for other parts of my work: I've been getting some good 
mileage out of a small set of generic primitives for operating 
on serial streams; I will probably pull them out and release 
them with some documentation soon.


futures and related asynchronous combinators

2016-03-27 Thread Vlad Levenfeld via Digitalmars-d-announce

https://github.com/evenex/future/

I've been having to do a lot of complicated async work lately 
(sometimes multithreaded, sometimes not), and I decided to 
abstract some patterns out and unify them with a little bit of 
formalism borrowed from functional languages. I've aimed to keep 
things as simple as possible while providing a full spread of 
functionality. This has worked well for me under a variety of 
use-cases, but YMMV of course.


Anyway I've occasionally seen people on IRC asking about futures, 
so I thought I'd share and make this announcement.


This lib depends on another lib of mine (for tagged unions and 
related things), which might not appeal to some, but if there is 
demand for futures sans dependencies I can always go back and 
manually inline some of the templates.


TL;DR:

  auto x = async!((y,z) => y + z)(1,2);
  x.await;
  assert(x.result.success == 3);


Re: N-dimensional slices is ready for comments!

2015-07-11 Thread Vlad Levenfeld via Digitalmars-d-announce

On Saturday, 20 June 2015 at 09:17:22 UTC, Ilya Yaroshenko wrote:

autodata is hard to understand without HTML documentation.
Automated documentation based on 
https://github.com/kiith-sa/harbored-mod

can be found at
http://ddocs.org/autodata/~master/index.html,  and it is empty. 
You may want to read http://dlang.org/ddoc.html


Regards,
Ilya


Thanks for the info. I have now documented the lib (and its 
dependencies) and pushed the updates, but the ddocs site seems to 
be down for me, so I can't see if the docs have made their way to 
the site.
In any case, you can take a look if you like, once the site is 
working again.


Re: N-dimensional slices is ready for comments!

2015-06-19 Thread Vlad Levenfeld via Digitalmars-d-announce

On Friday, 19 June 2015 at 21:43:59 UTC, Vlad Levenfeld wrote:

https://github.com/evenex/autodata

N-dimensional slicing, range ops (map, zip, repeat, cycle, etc.) 
lifted to n dimensions, n-dim-specific ops like extrusion, 
n-dim to d-dim of n-1-dim, flattening for lexicographic 
traversal, support for non-integer indices. I posted this 
a while ago but no one took notice. But if this is happening 
here now, feel free to crib anything that you think might look 
useful, as I'd hate to think all of this prior work went to 
waste.


and the dub package: http://code.dlang.org/packages/autodata


Re: N-dimensional slices is ready for comments!

2015-06-19 Thread Vlad Levenfeld via Digitalmars-d-announce

On Friday, 19 June 2015 at 10:13:42 UTC, Ilya Yaroshenko wrote:

On Friday, 19 June 2015 at 01:46:05 UTC, jmh530 wrote:

On Monday, 15 June 2015 at 08:40:31 UTC, Ilya Yaroshenko wrote:

Hi All,

PR and Examples: 
https://github.com/D-Programming-Language/phobos/pull/3397

DUB http://code.dlang.org/packages/dip80-ndslice

N-dimensional slices are a real-world example where `static 
foreach` would be useful.

The corresponding lines are marked with //TODO: static foreach

Best regards,
Ilya


The operator overloading and slicing mechanics look great, but 
I'm probably more excited about the future work you have 
listed.


Some thoughts:
The top line of ndslice.d says it is for "creating 
n-dimensional random access ranges". I was able to get the 
example for operator overloading working for dynamic arrays, 
but it doesn't seem to work for static ones. Hopefully this 
can be extended, and hopefully the future work on foreach 
byElement will support static arrays in addition to dynamic 
ones.




You can slice fixed size arrays:


  import std.experimental.ndslice;

  void myFun()
  {
      float[4096] data;
      // view the 4096 floats as a 256x16 tensor, without copying:
      auto tensor = data[].sliced(256, 16);
      // ... use tensor
  }

My second point seems to be related to a discussion on the 
github page about accessing N-dimensional arrays by index. 
Basically there are some circumstances where it is convenient 
to loop by index on an N-dimensional array.




Denis already had the same concept implemented in his `unstd` 
library, so ndslice is going to have it too.


Finally, I have been trying to do something like

  auto A = 4.iota.sliced(2, 2).array;
  auto B = to!(float[][])(A);

without any luck, though it seems to work for one-dimensional 
arrays. I think instead you have to do something like

  auto A = iota(0.0f, 4.0f, 1).sliced(2, 2).array;


Thanks!
I will add this kind of functionality:

auto A = 4.iota.sliced(2, 2);
auto B = cast(float[][]) A;

import std.conv;
auto C = A.to!(float[][]); //calls opCast


https://github.com/evenex/autodata

N-dimensional slicing, range ops (map, zip, repeat, cycle, etc.) 
lifted to n dimensions, n-dim-specific ops like extrusion, n-dim 
to d-dim of n-1-dim, flattening for lexicographic traversal, 
support for non-integer indices. I posted this a while ago but 
no one took notice. But if this is happening here now, feel free 
to crib anything that you think might look useful, as I'd hate 
to think all of this prior work went to waste.


Re: This Week in D #19: Reggae, ranges, and DConf Wed. Afternoon summary

2015-06-08 Thread Vlad Levenfeld via Digitalmars-d-announce

On Tuesday, 9 June 2015 at 02:27:26 UTC, Meta wrote:
first-class tuples and pattern matching / destructuring are 
very important quality of life issues and a great selling point 
for D. Even going with what we currently have for template 
type-matching, but for values, would be a great step in this 
direction.


+1


Re: This week in D #13: =void tip, ddmd, if(arr) warn, dconf registration

2015-04-12 Thread Vlad Levenfeld via Digitalmars-d-announce

On Monday, 13 April 2015 at 03:37:17 UTC, Adam D. Ruppe wrote:

http://arsdnet.net/this-week-in-d/apr-12.html

http://www.reddit.com/r/d_language/comments/32ek17/this_week_in_d_13_void_tip_ddmd_ifarr_warn_dconf/

https://twitter.com/adamdruppe/status/587459000729473024


I did not know about void initialization; that was really 
helpful.
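
(For anyone else who missed it, a minimal illustration of "= 
void" initialization; this is standard D, though the example 
itself is mine:)

  void main()
  {
      float[4096] buffer = void; // skip default initialization;
                                 // contents are garbage until written
      buffer[] = 0.0f;           // safe only because we write
                                 // before we read
  }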


Re: D idioms list

2015-01-14 Thread Vlad Levenfeld via Digitalmars-d-announce
For optimal AA lookup, this idiom is also nice if you only need 
the result for one line:


  if (auto found = key in AA)
      do_stuff(found); // `found` points at the value in the AA
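
A self-contained version, for context (identifiers hypothetical):

  import std.stdio;

  void main()
  {
      int[string] ages = ["alice": 30];

      // `key in aa` yields a pointer to the value, or null if the
      // key is absent, so the test and the lookup cost one probe:
      if (auto found = "alice" in ages)
          writeln(*found); // dereference to get the value
  }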