Re: initialize float4 (core.simd)

2019-10-06 Thread Bogdan via Digitalmars-d-learn

On Saturday, 21 September 2019 at 14:31:15 UTC, Stefan Koch wrote:

On Saturday, 21 September 2019 at 13:42:09 UTC, Bogdan wrote:

Well, this seems to be working:


float[4] doSimd(float[4] values, float delta)
{
  float4 v_delta = delta;

  float4 v_values = __simd(XMM.ADDPS,
                           __simd(XMM.LODAPS, values[0]),
                           v_delta);

  return [v_values[0], v_values[1], v_values[2], v_values[3]];
}


Not sure if it's correct though.


float4 x;
float x1, x2, x3, x4;
x[0] = x1;
x[1] = x2;
x[2] = x3;
x[3] = x4;


Thank you! That also seems to be working.
Turns out that using this method for drawing a rectangle is about 
twice as slow as a naive implementation, so I'm almost certainly 
doing something silly/wrong. :)
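
For reference, the element-wise initialization suggested above can be folded into one compilable function. This is only a sketch (assuming DMD or LDC with `core.simd` available); it uses the vector type's `.array` property and plain vector arithmetic instead of raw `__simd` opcodes, and the function name `addDelta` is made up for illustration:

```d
import core.simd;

// Sketch: load a float[4] into a float4 via .array, add a broadcast
// scalar to every lane, and copy the result back out as a float[4].
float[4] addDelta(float[4] values, float delta)
{
    float4 v_delta = delta;    // a scalar initializer broadcasts to all lanes
    float4 v_values;
    v_values.array = values;   // element-wise load from the static array

    float4 sum = v_values + v_delta;

    return sum.array;
}
```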


Re: initialize float4 (core.simd)

2019-10-06 Thread NaN via Digitalmars-d-learn

On Saturday, 21 September 2019 at 12:50:46 UTC, Bogdan wrote:
I'm trying to understand how to use the `core.simd` 
functionality, and I'm having trouble initializing a float4 
vector.


Here's my example code:

```
import std.stdio;
import core.simd;

void main()
{
  float[4] values = [1.0f, 2.0f, 3.0f, 4.0f];
  float delta = 15.0f;

  writeln(doSimd(values, delta));

}

float[4] doSimd(float[4] values, float delta)
{
  float4 v_delta = delta;
  float4 v_values = values;

  // ... do SIMD ...

  return [v_delta[0], v_delta[0], v_delta[0], v_delta[0]];
}
```

Compilation is failing with the following error:

```
source/app.d(16,21): Error: cannot implicitly convert 
expression values of type float[4] to __vector(float[4])

dmd failed with exit code 1.
```
How do you initialize a float4 with some useful data?


You should probably have a look at this...

https://github.com/AuburnSounds/intel-intrinsics
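
With that package, the failing example can be written against Intel's familiar intrinsic names. A sketch, assuming the dub package's `inteli.xmmintrin` module, which mirrors `_mm_loadu_ps`, `_mm_set1_ps`, `_mm_add_ps`, and `_mm_storeu_ps`:

```d
import inteli.xmmintrin; // from the intel-intrinsics dub package

// Sketch: load four floats, broadcast the delta, add, and store back.
float[4] doSimd(float[4] values, float delta)
{
    __m128 v_values = _mm_loadu_ps(values.ptr); // unaligned 4-float load
    __m128 v_delta  = _mm_set1_ps(delta);       // broadcast to all lanes
    __m128 sum      = _mm_add_ps(v_values, v_delta);

    float[4] result;
    _mm_storeu_ps(result.ptr, sum);             // unaligned 4-float store
    return result;
}
```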




Re: initialize float4 (core.simd)

2019-10-06 Thread Bogdan via Digitalmars-d-learn

On Sunday, 6 October 2019 at 11:53:29 UTC, NaN wrote:


You should probably have a look at this...

https://github.com/AuburnSounds/intel-intrinsics


Thanks, that looks quite useful.
Also, it seems that I need to use either LDC or GDC instead of 
DMD. :)


Re: do mir modules run in parallell

2019-10-06 Thread David via Digitalmars-d-learn

On Sunday, 6 October 2019 at 05:32:34 UTC, 9il wrote:
The parallelism of mir-blas, mir-lapack, and lubeck depends on the 
system BLAS/LAPACK library (OpenBLAS, Intel MKL, or the Accelerate 
framework on macOS).


mir-optim is single-threaded by default, but it can use TaskPool 
from the D standard library as well as user-defined thread pools.


mir-random was created with multithreaded programs in mind; check 
the documentation for a particular engine. The general idea is that 
each thread has its own engine.


The other libraries are single-threaded but can be used in 
multithreaded programs with Phobos threads or other threading 
libraries.


Best,
Ilya
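
For illustration, the kind of user-defined thread pool mir-optim can be pointed at is Phobos's `std.parallelism.TaskPool`. A minimal sketch (the pool size and the parallel-sum workload are arbitrary choices, not anything mir-optim requires):

```d
import std.parallelism : TaskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // Sketch: a four-worker pool, shut down when we're done with it.
    auto pool = new TaskPool(4);
    scope (exit) pool.finish();

    // Sum 0 .. 999 across the pool's worker threads.
    auto total = pool.reduce!"a + b"(iota(1_000));
    writeln(total); // 499500
}
```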


Thanks! I will try it out accordingly.
Best, David


Re: Ranges to deal with corner cases and "random access"

2019-10-06 Thread Brett via Digitalmars-d-learn

Here is a sort of proof of concept:

struct CtsExtend
{
    static auto opCall(T)(T a)
    {
        struct _CtsExtend
        {
            T _a;
            auto opIndex(int i)
            {
                if (i < 0) return _a[0];
                if (i >= _a.length) return _a[$ - 1];
                return _a[i];
            }
        }

        _CtsExtend x;
        x._a = a;

        return x;
    }
}


Not sure if it can be simplified. (I had to create a sub-struct to 
get things to work; hopefully it gets optimized out or can be 
simplified. I originally tried to do it all using metaprogramming, 
but I couldn't figure it out.)
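
One possible simplification (an untested sketch; the free-function name `ctsExtend` is hypothetical) is a function template that returns a Voldemort type, which drops the outer struct and `opCall` shell entirely:

```d
// Sketch: wrap any indexable so out-of-range indices clamp to the ends.
auto ctsExtend(T)(T a)
{
    struct Result
    {
        T _a;
        auto opIndex(long i)
        {
            if (i < 0) return _a[0];          // extend to the left
            if (i >= _a.length) return _a[$ - 1]; // extend to the right
            return _a[cast(size_t) i];
        }
    }
    return Result(a);
}
```

Usage stays the same: `ctsExtend(arr)[-4]` yields the first element, and any index past the end yields the last.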


For any indexable type it overrides indexing and modifies its 
behavior to constantly extend the ends.


CtsExtend(arr)[-4]

In general it would be nice to get this type of thing full-featured 
(the various extensions, optimization, working with ranges and 
other kinds of indexables that might allow negative indices, 
overriding the extension values, keeping a history, etc.).


If it can be done and made to work well with ranges, it would allow 
many algorithms to be expressed very easily and would make ranges 
more powerful.


Re: Blog Post #76: Nodes and Noodles, Part II

2019-10-06 Thread Zekereth via Digitalmars-d-learn

On Friday, 4 October 2019 at 11:36:52 UTC, Ron Tarrant wrote:
Here's the second instalment of the Nodes-n-noodles series 
wherein noodle drawing on a DrawingArea is now complete. You 
can find it here: 
http://localhost:4000/2019/10/04/0076-cairo-xi-noodles-and-mouse-clicks.html


Here's the correct URL 
https://gtkdcoding.com/2019/10/04/0076-app-01-iii-noodles-and-mouse-clicks.html


Great tutorial(s)! Thanks!


Re: Blog Post #76: Nodes and Noodles, Part II

2019-10-06 Thread Ron Tarrant via Digitalmars-d-learn

On Sunday, 6 October 2019 at 23:00:51 UTC, Zekereth wrote:

Here's the correct URL 
https://gtkdcoding.com/2019/10/04/0076-app-01-iii-noodles-and-mouse-clicks.html


Great tutorial(s)! Thanks!


LOL! Thanks, Zekereth.