DMD failed with exit code -1073741819

2022-05-03 Thread jmh530 via Digitalmars-d-learn
I made some changes to some code I'm working on and now some 
lines are giving me funky DMD error codes (I can tell it is 
specific lines because when I comment them out the errors go 
away). For instance, on one line I have a static assert that 
gives error code -1073741819, but if I split it into two 
pieces (so that part of it is assigned to a value and then I use 
the typeof of that value in the static assert), then DMD does not 
complain. Does anyone have any idea what causes these types of 
errors?


I was leaning towards it being related to running out of memory, 
but I'm using dub and I've tried turning "lowmem" on and off.


I also have used -v both in dflags and at the command line and 
haven't noticed any obvious errors.


Re: DMD failed with exit code -1073741819

2022-05-03 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 3 May 2022 at 19:03:56 UTC, Dennis wrote:

On Tuesday, 3 May 2022 at 18:22:49 UTC, jmh530 wrote:

Does anyone have any idea what causes these types of errors?


Sounds like a stack overflow, maybe your code has a 
complex/recursive part that makes DMD's call stack very deep.


Thanks. I think this was it. I figured it out, but it took a bit 
of time to identify where the problem was coming from...


Basically, I started out with a template like
```d
template foo(T, U u = val, V, W) {}
```
and refactored it to
```d
template foo(T, U u = val, V, W, U y = u) {}
```
which is when I started getting the problem, but I had changed a 
bunch of other stuff to introduce `y`, so it wasn't entirely 
clear why that would cause the problems.

Anyway, changing it to
```d
template foo(T, U u = val, V, W, U y = val) {}
```
made the problem go away.


Compile delegate with enum into proper function?

2022-05-07 Thread jmh530 via Digitalmars-d-learn
In the code below, there is a two parameter function `foo` and an 
overload of it with only one parameter. In the overload case, I 
force the second one to be 1, but ideally there should be a way 
to specify it at compile-time.


It would be kind of nice to be able to do it with an enum and a 
delegate or something, perhaps like `foo2`. However, that is 
still generating a delegate. Something like `foo3` also works, 
but I can't create that within a main function like I can for the 
delegate.


I suppose the question is: why can't I tell the compiler to 
compile a delegate into a proper function? The same question 
holds for a function pointer. The difference, as far as I can 
tell, is that the delegate with enums isn't really taking 
advantage of the features of a delegate.


```d
int foo(int x, int a) {
return x + a;
}

int foo(int x) {
return x + 1;
}

enum int a = 1;
auto foo2 = (int x) => foo(x, a); // lambda wrapping foo with the enum
int foo3(int x) {
return x + a;
}
```


Re: Compile delegate with enum into proper function?

2022-05-07 Thread jmh530 via Digitalmars-d-learn

On Saturday, 7 May 2022 at 18:46:03 UTC, Paul Backus wrote:

On Saturday, 7 May 2022 at 18:36:40 UTC, jmh530 wrote:
In the code below, there is a two parameter function `foo` and 
an overload of it with only one parameter. In the overload 
case, I force the second one to be 1, but ideally there should 
be a way to specify it at compile-time.


Have you tried [`std.functional.partial`][1]? Using it, your 
example could be written like this:


```d
import std.functional: partial;

enum int a = 1;
alias foo2 = partial!(foo, a);
```

[snip]


Thanks. This is basically equivalent to

```d
int foo(int a)(int x) { return x + a; }
alias foo2 = foo!a;
```

The downside is that you wouldn't be able to `alias foo = foo!a`. 
Another approach would be to do something like


```d
int foo(int b = a)(int x) { return x + b; }
```

so that the default case could be handled.
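
For example, a self-contained sketch of how the default gets picked up 
(hypothetical code, just to illustrate):

```d
enum int a = 1;

int foo(int b = a)(int x) { return x + b; }

void main()
{
    assert(foo(3) == 4);   // default case: b = a = 1
    assert(foo!5(3) == 8);  // explicit compile-time argument
}
```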


Re: Compile delegate with enum into proper function?

2022-05-07 Thread jmh530 via Digitalmars-d-learn

On Saturday, 7 May 2022 at 23:30:37 UTC, Paul Backus wrote:

[snip]

Worth noting that you *can* write

```d
alias foo = partial!(foo, a);
```

...which will add the partially-applied version to `foo`'s 
overload set.


You sure about that? Below fails to compile on godbolt with ldc 
1.27.1 [1]. For some reason run.dlang.org is just hanging...


```d
import core.stdc.stdio: printf;
import std: partial;

int foo(int x, int a) {
return x + a;
}
enum int a = 2;

alias foo = partial!(foo, a);

void main() {
int x = 2;
int y = foo(x, a);
printf("the value of y is %i", y);
auto z = foo(x);
printf("the value of z is %i", z);
}
```

[1] https://d.godbolt.org/z/dx8aWfjYW


Re: Compile delegate with enum into proper function?

2022-05-08 Thread jmh530 via Digitalmars-d-learn

On Sunday, 8 May 2022 at 03:58:06 UTC, Tejas wrote:

[snip]

If there is only one possible value for the  overload, is there 
an issue with using default arguments?

[snip]


Default arguments are intended to be resolved at runtime. That 
is, if you compile a function with two parameters and one of them 
has a default, then the compiler will compile one function that 
has two parameters as inputs.


However, since `foo` above is a relatively simple function, if 
you compile with -O, then it gets inlined. It doesn't matter so 
much whether `a` is an enum or a literal, since the compiler 
knows what it is at compile time and will inline it to remove the 
call to `foo` anyway.


I am interested in more complex cases where the compiler isn't 
able to inline the function and where the behavior of the second 
parameter might be more significant. The default parameter would 
then be doing the calculation at runtime when ideally it may be 
known at compile time, and the compiler could generate a 
separate, simpler function taking only one parameter.
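
A hypothetical sketch of the distinction (names made up): each 
instantiation of a template value parameter is its own 
single-parameter function, so its address even fits an API that 
expects a one-parameter function pointer, which a runtime default 
argument can't give you:

```d
int fooCT(int a)(int x) { return x + a; }

alias OneParamFn = int function(int);

int callIt(OneParamFn f, int x) { return f(x); }

void main()
{
    OneParamFn f = &fooCT!2; // a specialized function taking only x
    assert(callIt(f, 3) == 5);
}
```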




Re: Including C sources in a DUB project

2022-05-10 Thread jmh530 via Digitalmars-d-learn

On Monday, 9 May 2022 at 09:17:06 UTC, Alexander Zhirov wrote:

[snip]


It would be nice if dub included a directory of example 
configurations for common issues like this.


Re: Including C sources in a DUB project

2022-05-10 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 10 May 2022 at 19:13:21 UTC, Dennis wrote:

[snip]

It has an example directory: 
https://github.com/dlang/dub/tree/master/examples


If your configuration is missing, you could make a Pull Request 
to add it.


So it does. Thanks.

We might also link to that on the dub.pm website.


Re: Including C sources in a DUB project

2022-05-10 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 10 May 2022 at 20:50:12 UTC, Alexander Zhirov wrote:

On Tuesday, 10 May 2022 at 19:13:21 UTC, Dennis wrote:
It has an example directory: 
https://github.com/dlang/dub/tree/master/examples


And if there are two compilers in the system - `dmd` and `ldc` - 
which compiler does `dub.json` choose? And how do I specify the 
specific compiler I want?


Following the command line documentation [1], you can use the 
`--compiler` option to select which one to use (e.g. 
`dub build --compiler=ldc2`). You can make the dub.json do 
different things depending on the target and/or compiler, but the 
default works fine regardless of which you pick if you're doing 
something simple.


[1] https://dub.pm/commandline.html


Re: What are (were) the most difficult parts of D?

2022-05-11 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 11 May 2022 at 09:06:52 UTC, bauss wrote:

On Wednesday, 11 May 2022 at 05:41:35 UTC, Ali Çehreli wrote:
What are you stuck at? What was the most difficult features to 
understand? etc.


To make it more meaningful, what is your experience with other 
languages?


Ali


dip1000


Ha, I just mentioned this on another thread...
https://dlang.org/spec/function.html#ref-return-scope-parameters


Re: What are (were) the most difficult parts of D?

2022-05-12 Thread jmh530 via Digitalmars-d-learn

On Thursday, 12 May 2022 at 12:13:32 UTC, Basile B. wrote:

[snip]
```
is ( Type : TypeSpecialization , TemplateParameterList )
is ( Type == TypeSpecialization , TemplateParameterList )
is ( Type Identifier : TypeSpecialization , 
TemplateParameterList )
is ( Type Identifier == TypeSpecialization , 
TemplateParameterList )

```

I never remember those variants, because basically you never 
need them...

They were required for std.traits and that's it.


What's the difference between a Type and Type Identifier?


Re: What are (were) the most difficult parts of D?

2022-05-12 Thread jmh530 via Digitalmars-d-learn

On Thursday, 12 May 2022 at 15:32:24 UTC, Adam D Ruppe wrote:

On Thursday, 12 May 2022 at 15:18:34 UTC, jmh530 wrote:

What's the difference between a Type and Type Identifier?


The is expression roughly follows variable declaration style.

You write

int a;

to declare a new symbol named `a` of type `int`.

Similarly,

static if(is(T a))

declares a new symbol of type `a` if `T` is a valid type. (Of 
course, in this case, a and T are aliases of each other, so it 
isn't super useful, but in the more complex matches using the 
== and : operators, it might be different.)


The Identifier is optional.


Ah, yeah I know about that.


Implicit integer conversions Before Preconditions

2022-05-24 Thread jmh530 via Digitalmars-d-learn
In the code below, `x` and `y` are implicitly converted to `uint` 
and `ushort` before the function preconditions are run.


Is there any way to change this behavior? It feels unintuitive, 
and I can't find where the spec says when the conversions occur 
in this case, but they clearly happen before the preconditions 
are run.


```d
import std.stdio: writeln;

void foo(uint x)
in (x >= 0)
{
writeln(x);
}

void foo(ushort x)
in (x >= 0)
{
writeln(x);
}

void main() {
int x = -1;
foo(x); //prints 4294967295
short y = -1;
foo(y); //prints 65535
}
```


Re: Implicit integer conversions Before Preconditions

2022-05-24 Thread jmh530 via Digitalmars-d-learn
On Tuesday, 24 May 2022 at 21:05:00 UTC, Steven Schveighoffer 
wrote:

[snip]

```d
// e.g.
void foo(int x)
in (x >= 0)
{
   return foo(uint(x));
}
```

And remove those useless `in` conditions on the unsigned 
versions, an unsigned variable is always >= 0.


-Steve


Thanks. That makes perfect sense. I just got thrown by the 
preconditions not running first.


importC and cmake

2022-09-06 Thread jmh530 via Digitalmars-d-learn
I was thinking about trying out importC with a library I have 
used in the past (it's been a few years since I used it with D). 
The library uses cmake to generate static or dynamic libraries (I 
believe I did static with Windows and dynamic with Linux, but I 
can't really recall).


My understanding of cmake is that it is used to generate the 
files needed to build something in a cross-platform way 
(make files for Linux, project files for Visual Studio, etc.). 
This doesn't seem consistent with how importC works (which would 
be using a D compiler to compile the project). I suppose I could 
just try it and see if it works, but since the project uses 
cmake, I wonder whether there are things cmake is doing that are 
important and would get missed in this naive approach (for 
instance, it has a way to use algorithms written in C++ by 
default, though they can be disabled in the cmake file).


Does anyone have any importC experience with libraries that are 
built with a tool like cmake that could help make this clearer?


Re: importC and cmake

2022-09-07 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 7 September 2022 at 00:31:53 UTC, zjh wrote:

On Tuesday, 6 September 2022 at 19:44:23 UTC, jmh530 wrote:


[...]


`xmake` is simpler.


Ok...but I didn't write the library so I can't exactly tell them 
to use xmake when they already use cmake.


Re: Function attribute best practices

2022-09-13 Thread jmh530 via Digitalmars-d-learn

On Monday, 12 September 2022 at 16:39:14 UTC, Paul Backus wrote:

[snip]

Yes. Except for `@trusted`, explicit attributes on template 
code are a smell.


[snip]


If I can be 100% sure that something will always be 
@safe/nothrow/pure/@nogc, then I might consider marking it as 
such. For instance, a function that takes any floating point 
type, does some calculation, and then returns it. I figure it is 
documented for the user and at least this will save the compiler 
the effort of figuring it out. If I can't be sure, then I don't.
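
For instance, a minimal sketch of the sort of function I have in 
mind (hypothetical example, just to illustrate):

```d
import std.traits : isFloatingPoint;

// only ever does arithmetic on a built-in floating-point type,
// so the explicit attributes can't be broken by a user-supplied type
T square(T)(T x) @safe pure nothrow @nogc
if (isFloatingPoint!T)
{
    return x * x;
}

@safe unittest
{
    assert(square(3.0) == 9.0);
    assert(square(2.0f) == 4.0f);
}
```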


C function taking two function pointers that share calculation

2022-09-14 Thread jmh530 via Digitalmars-d-learn
There is a C library I sometimes use that has a function that 
takes two function pointers. However, there are some calculations 
that are shared between the two functions that would get pointed 
to. I am hoping to only need to do these calculations once.


The code below sketches out the general idea of what I've tried 
so far. The function `f` handles both of the calculations that 
would be needed, returning a struct. Functions `gx` and `gy` can 
return the field of the struct that is relevant. Both of them 
could then get fed into the C function as function pointers.


My concern is that `f` would then get called twice, whereas that 
wouldn't be the case in a simpler implementation (`gx_simple`, 
`gy_simple`). ldc will optimize the issue away in this simple 
example, but I worry that might not generally be the case.


How do I ensure that the commonCalculation is only done once?

```d
struct Foo
{
int x;
int y;
}

Foo f(int x, int a)
{
int commonCalculation = a * x;
return Foo(commonCalculation * x, 2 * commonCalculation);
}

int gx(int x, int a) { return f(x, a).x;}
int gy(int x, int a) { return f(x, a).y;}

//int gx_simple(int x, int a) { return a * x * x;}
//int gy_simple(int x, int a) { return 2 * a * x;}

void main() {
import core.stdc.stdio: printf;
printf("the value of x is %i\n", gx(3, 2));
printf("the value of y is %i\n", gy(3, 2));
}
```


Re: C function taking two function pointers that share calculation

2022-09-14 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 14 September 2022 at 18:02:07 UTC, JG wrote:

[snip]

Maybe others know better, but I would have thought the only way 
to do this is to use globals. Often C libraries that I have 
used get around this by taking a function and a pointer, and then 
the library calls your function on the pointer, simulating a D 
delegate.


The C function does make use of a pointer to some data (that I 
neglected to mention), but your comment gives me an idea. Thanks.
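
For the archives, a rough sketch of the idea (the callback 
signatures here are hypothetical, not the actual library's): the 
shared calculation gets cached in a context struct that the C 
library hands back to each callback through its user-data pointer.

```d
struct Ctx
{
    int a;       // parameter of the shared calculation
    int lastX;   // input the cache was computed for
    int common;  // cached commonCalculation
    bool valid;
}

int commonCalc(Ctx* ctx, int x)
{
    // recompute only when x changes
    if (!ctx.valid || ctx.lastX != x)
    {
        ctx.common = ctx.a * x;
        ctx.lastX = x;
        ctx.valid = true;
    }
    return ctx.common;
}

extern(C) int gx(int x, void* data)
{
    auto ctx = cast(Ctx*) data;
    return commonCalc(ctx, x) * x;
}

extern(C) int gy(int x, void* data)
{
    auto ctx = cast(Ctx*) data;
    return 2 * commonCalc(ctx, x);
}

void main()
{
    import core.stdc.stdio : printf;
    auto ctx = Ctx(2);
    // the C library would call gx/gy through its function pointers,
    // handing back &ctx as the user-data pointer
    printf("the value of x is %d\n", gx(3, &ctx));
    printf("the value of y is %d\n", gy(3, &ctx));
}
```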


Re: plot api

2022-09-14 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 14 September 2022 at 19:34:56 UTC, Alain De Vos 
wrote:

Let's say i want to plot the function f(x)=sin(x)/x.
Which API would you advice, in order for me to not re-invent 
the wheel.


Have you tried ggplotd?
https://code.dlang.org/packages/ggplotd


Re: Best way to read CSV data file into Mir (2d array) ndslice?

2022-09-21 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 21 September 2022 at 05:31:48 UTC, mw wrote:

Hi,

I'm just wondering what is the best way to read CSV data file 
into Mir (2d array) ndslice? Esp. if it can parse date into 
int/float.


I searched a bit, but can't find any example.


Thanks.


It probably can't hurt to try the simplest approach first. 
`std.csv` can return an input range that you can then use to 
create an ndslice. Offhand, I don't know what D tools are an 
alternative to `std.csv` for reading CSVs.


ndslice assumes homogeneous data, but you can put the dates (as 
Date types) into the labels (as data frames). However, 
there's a bit to be desired in terms of getting that 
functionality integrated into the rest of the package [1].


[1] https://github.com/libmir/mir-algorithm/issues/426


Re: Best way to read CSV data file into Mir (2d array) ndslice?

2022-09-21 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 21 September 2022 at 13:08:14 UTC, jmh530 wrote:

On Wednesday, 21 September 2022 at 05:31:48 UTC, mw wrote:

Hi,

I'm just wondering what is the best way to read CSV data file 
into Mir (2d array) ndslice? Esp. if it can parse date into 
int/float.


I searched a bit, but can't find any example.


Thanks.


It probably can't hurt to try the simplest approach first. 
`std.csv` can return an input range that you can then use to 
create an ndslice. Offhand, I don't know what D tools are an 
alternative to `std.csv` for reading CSVs.


ndslice assumes homogeneous data, but you can put the dates (as 
Date types) into the labels (as data frames). However, 
there's a bit to be desired in terms of getting that 
functionality integrated into the rest of the package [1].


[1] https://github.com/libmir/mir-algorithm/issues/426


I just tried doing it with `std.csv`, but my version was a bit 
awkward since it doesn't seem quite so straightforward to take 
the result of csvReader and put it in an array; I had to loop 
over it and fill the slice myself. I also wanted to allocate the 
array up front, but to do that I needed to know how big it was, 
so I ended up doing two passes over the data, which isn't ideal.


```d
import std.csv;
import std.stdio: writeln;
import mir.ndslice.allocation: slice;

void main() {
    string text =
        "date,x1,x2\n1/31/2010,65,2.5\n2/28/2010,123,7.5";

    auto records_firstpass = text.csvReader!double(["x1","x2"]);
    auto records_secondpass = text.csvReader!double(["x1","x2"]);

    // first pass: count the rows so the slice can be allocated up front
    size_t len = 0;
    foreach (record; records_firstpass) {
        len++;
    }
    auto data = slice!double(len, 2);

    // second pass: fill the preallocated slice element by element
    size_t i = 0;
    size_t j;
    foreach (record; records_secondpass)
    {
        j = 0;
        foreach (r; record) {
            data[i, j] = r;
            j++;
        }
        i++;
    }
    writeln(data);
}
```
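
For comparison, a rough single-pass version (untested, and it 
still allocates an intermediate array per row) that leans on 
`fuse` instead of preallocating:

```d
import std.algorithm.iteration : map;
import std.array : array;
import std.csv : csvReader;
import std.stdio : writeln;
import mir.ndslice.fuse : fuse;

void main() {
    string text =
        "date,x1,x2\n1/31/2010,65,2.5\n2/28/2010,123,7.5";

    // materialize each record as a double[], then fuse the rows
    // into a contiguous 2-D slice
    auto data = text.csvReader!double(["x1", "x2"])
        .map!(record => record.array)
        .array
        .fuse;
    writeln(data);
}
```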


Re: csvReader: how to read only selected columns while the class Layout has extra field?

2022-10-03 Thread jmh530 via Digitalmars-d-learn

On Sunday, 2 October 2022 at 21:18:43 UTC, mw wrote:

[snipping]

A CSV library should consider all the use cases, and allow 
users to ignore certain fields.


In R, you have to force `NULL` for `colClasses` for the other 
columns. In other words, the user has to know the number of 
columns of the csv file in order to be able to skip them.

https://stackoverflow.com/questions/29332713/how-to-skip-column-when-reading-csv-file


Re: library to solve the system of linear equations

2022-10-17 Thread jmh530 via Digitalmars-d-learn

On Monday, 17 October 2022 at 19:54:12 UTC, Yura wrote:
Dear All, Thank you so much for your replies and hints! I got 
it working today. All the libraries are properly linked and the 
Equation solver runs smoothly.


The compilers turned out to be problematic though. The "Mir" 
library does not work with the Ubuntu 18.04 gdc and ldc 
compilers. I have managed to install the latest version dmd, 
and it works. But I suspect that the dmd compiler is not 
optimal in terms of performance. The question becomes whether 
it is possible to install the most recent ldc and gdc compilers 
on Ubuntu 18.04?


[snip]


If you have a problem with support for mir, submit a bug report. 
I don't think gdc is supported, but ldc should be.


Re: Catching C errors

2022-10-20 Thread jmh530 via Digitalmars-d-learn
On Thursday, 20 October 2022 at 11:59:45 UTC, data pulverizer 
wrote:

[snip]
Mine is also private for now till it reaches an acceptable 
state when I'll think about whether it should be publicly 
released or jealously guarded. It's a project I'm building for 
my own use really.


It can't hurt to release it publicly (when ready) so that other 
users can work through any kinks.


Re: Hipreme's #4 Tip of the day - Don't use package.d

2022-11-04 Thread jmh530 via Digitalmars-d-learn

On Friday, 4 November 2022 at 16:56:59 UTC, Hipreme wrote:

[snip]

You can use any name instead. The only difference between an 
ordinary source file and a package.d is the module name. For 
instance, if you're inside the filesystem directory, you can 
change the name to literally anything and import instead. To 
make my engine's names unique I have been using a convention 
for the package.d names as an abbreviation of the directory 
name plus `definitions` or something like that.


If you don't plan to use `package(package_name)` visibility, then 
I don't know what the point of it is.


Re: Hipreme's #4 Tip of the day - Don't use package.d

2022-11-04 Thread jmh530 via Digitalmars-d-learn

On Friday, 4 November 2022 at 19:17:04 UTC, Adam D Ruppe wrote:

On Friday, 4 November 2022 at 19:10:33 UTC, jmh530 wrote:
If you don't plan to use `package(package_name)` visibility, then 
I don't know what the point of it is.


This works fine without the package.d anyway.


Oh really, then what's the point of package.d?


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread jmh530 via Digitalmars-d-learn

On Sunday, 4 December 2022 at 23:25:34 UTC, Adam D Ruppe wrote:

On Sunday, 4 December 2022 at 22:46:52 UTC, Ali Çehreli wrote:

That's way beyond my pay grade. Explain please. :)


The reason that the GC stops threads right now is to ensure 
that something doesn't change in the middle of its analysis.


[snip]


That's a great explanation. Thanks.


Re: Solving optimization problems with D

2023-01-03 Thread jmh530 via Digitalmars-d-learn

On Sunday, 1 January 2023 at 22:00:29 UTC, max haughton wrote:

On Sunday, 1 January 2023 at 21:11:06 UTC, Ogi wrote:
I’ve read this [series of 
articles](https://www.gamedeveloper.com/design/decision-modeling-and-optimization-in-game-design-part-1-introduction) about using Excel Solver for all kinds of optimization problems. This is very neat, but of course, I would prefer to write models with code instead, preferably in D. I glanced at mir-optim but it requires knowledge of advanced math. Is there something more approachable for a layperson?


What do you want to optimize? Optimization in general requires 
reasonably advanced mathematics, whereas a single problem can 
be simplified.


There are also a lot of C libraries that you can call from D for 
optimization, but like Max says, knowing what you want to 
optimize helps a lot. Excel’s optimizer works for small problems 
but chokes if the dimension increases too much. It is probably 
some sort of nonlinear gradient-free solver.


Re: Assign to Array Column

2023-02-01 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 1 February 2023 at 13:14:47 UTC, Siarhei Siamashka 
wrote:

On Tuesday, 31 January 2023 at 01:04:41 UTC, Paul wrote:

Greetings,

for an array byte[3][3] myArr, I can code myArr[0] = 5 and 
have:

5,5,5
0,0,0
0,0,0

Can I perform a similar assignment to the column?  This, 
myArr[][0] = 5, doesn't work.


This works fine for small arrays, but for large arrays such 
access pattern is cache unfriendly. It's usually best to 
redesign the code to avoid assignments to columns if possible 
(for example, by working with a transposed array). The language 
is not providing a convenient shortcut for something that is 
usually undesirable and expensive. And I think that this is 
actually reasonable.


If the code is slow, then profile and try to speed up parts that 
need it. The slowness may be due to a problem like this, but not 
always.


The OP could also try mir's slices.

```d
/+dub.sdl:
dependency "mir-algorithm" version="*"
+/

import mir.ndslice.fuse;
import std.stdio: writeln;

void main() {
auto x = [[0, 0, 0], [0, 0, 0]].fuse;
x[0, 0 .. $] = 5;
x[0  .. $, 1] = 5;
writeln(x);
}
```


Combining Templates While Minimizing Bloat

2023-02-14 Thread jmh530 via Digitalmars-d-learn
The code below has two `foo` functions that take slices: one 
accepts a `const(T)*` iterator and one accepts a generic 
Iterator, with the constraint that the slice isn't convertible to 
the first one. The nice thing about this is that if you pass it a 
slice with a `double*` or `const(double)*` iterator, then it 
doesn't increase template bloat. The problem, however, is that I 
have to either have two implementations or a separate `fooImpl` 
function to implement the desired functionality. Is there any way 
to combine the `foo` functions together in a way that maintains 
the template-bloat-reducing behavior of the first function?


The example below uses mir, but it just as easily could be 
adapted to other types.


```d
/+dub.sdl:
dependency "mir-algorithm" version="*"
+/
import std.stdio: writeln;
import mir.ndslice.slice;

void foo(T)(Slice!(const(T)*, 1) x)
{
    writeln("here");
    writeln("Iterator = ", typeid(IteratorOf!(typeof(x))));
}

void foo(Iterator)(Slice!(Iterator, 1) x)
    if (!is(Iterator : const(U)*, U))
{
    writeln("there");
    writeln("Iterator = ", typeid(IteratorOf!(typeof(x))));
}

void main()
{
    double[] x = [1.0, 2, 3];
    auto y = x.sliced;
    auto z = y.toConst;
    foo(y); //prints "here" and "Iterator=const(double)*"
    foo(z); //prints "here" and "Iterator=const(double)*"
    auto a = y / 6;
    foo(a); //prints "there" and "Iterator=(some long template text)"

}
```

I tried something like
```d
void foo(Iterator)(Slice!(const Iterator, 1) x) {}
```
but this isn't a valid mir iterator since it becomes 
const(double*) (a const pointer to a const double). What I need 
is const(double)* (a mutable pointer to a const double).


dub.selections.json & optional dependencies: How's it work?

2023-02-24 Thread jmh530 via Digitalmars-d-learn
I'm looking at the dub package format [1] about optional 
dependencies and it says:


"With this set to true, the dependency will only be used if 
explicitly selected in dub.selections.json. If omitted, this 
attribute defaults to false."


And it occurs to me that I don't know anything about how 
dub.selections.json works.


I would think dub optional dependencies work such that if there 
are no functions being called/compiled that import an optional 
dependency, then the dependency wouldn't be included. How is it 
different from `dmd -i`?


The dub.selections.json in some of my projects looks like a 
listing of the dependencies and their versions. The above should 
imply that the optional dependency would only get included there 
if the import is actually used somewhere in the project. Is that 
correct?


[1] https://dub.pm/package-format-json.html


Re: dub.selections.json & optional dependencies: How's it work?

2023-02-24 Thread jmh530 via Digitalmars-d-learn
On Friday, 24 February 2023 at 19:37:41 UTC, Steven Schveighoffer 
wrote:

On 2/24/23 2:01 PM, jmh530 wrote:
I'm looking at the dub package format [1] about optional 
dependencies and it says:


"With this set to true, the dependency will only be used if 
explicitly selected in dub.selections.json. If omitted, this 
attribute defaults to false."


And it occurs to me that I don't know anything about how 
dub.selections.json works.


Let's say you have dependency A, which has an *optional* 
dependency on B.


In your project, if you depend on A, but don't explicitly 
depend on B, then A will be built without B as a dependency.


If you have a dependency on A *and* B, then B's optional 
dependency will kick in, and participate in the selection of 
its version.


So for instance, you could depend on B version 1 or higher, and 
the optional dependency could be on v1.5 or higher; it will 
select the highest version that satisfies 1.5 or higher.


dub.selections.json only applies to the primary project. So the 
decision on which versions of which dependencies to use is 
decided by the whole tree.


-Steve


Ok, this makes sense. So it's all about dependencies downstream. 
For instance, if I list B as a dependency in my project but never 
use it in my own project, it will still compile in (so, for 
instance, if A's functionality differs depending on whether B is 
included, then I will get that functionality even if I don't use 
B directly in my project).


Preventing the Compiler from Optimizing Away Benchmarks

2023-03-13 Thread jmh530 via Digitalmars-d-learn
I was looking at [1] for ways to prevent the compiler from 
optimizing away code when trying to benchmark.


It has the following C++ code as a simpler version:
```
inline BENCHMARK_ALWAYS_INLINE void DoNotOptimize(Tp& value) {
asm volatile("" : "+r,m"(value) : : "memory");
}
```

I made an attempt to make a D version of it, but failed. 
Apparently DMD doesn't like the `""` in the first part of the asm 
instruction. I'm also not sure the `volatileLoad` command is 
right, but I didn't know of any other way to have volatile work 
in D (and I couldn't figure out how it actually worked from 
looking at the code).


```
void DoNotOptimize(T)(T* ptr)
{
import core.volatile: volatileLoad;
T value = volatileLoad(ptr);
asm {"" : "+r,m"(value) : : "memory";}
}
```

[1] 
https://theunixzoo.co.uk/blog/2021-10-14-preventing-optimisations.html


Re: Preventing the Compiler from Optimizing Away Benchmarks

2023-03-13 Thread jmh530 via Digitalmars-d-learn

On Monday, 13 March 2023 at 15:23:25 UTC, user1234 wrote:

[snip]


[1] 
https://theunixzoo.co.uk/blog/2021-10-14-preventing-optimisations.html


That's illegal code. You mix GCC/LLVM syntax with a D asm block, 
and the front-end won't recognize that.


LDC recognizes a syntax similar to what is described in your 
link, see 
https://wiki.dlang.org/LDC_inline_assembly_expressions. GDC has 
it too (since that's the syntax invented by GCC in the first 
place), but I can't find the documentation ATM.


Thanks, that helps. Below seems to be working (with LDC and -O): 
when I include the DoNotOptimize, it takes around 300-500us, but 
when I comment it out, it takes about 5us. It would still take 
some work to figure out how to get it to work with DMD.


```d
void DoNotOptimize(T)(T* ptr)
{
import ldc.llvmasm;
import core.volatile: volatileLoad;
T value = volatileLoad(ptr);

__asm("", "*mr,~{memory}", &value, );
}

void main() {
import std.algorithm.iteration: sum;
import std.array: uninitializedArray;
import std.datetime.stopwatch;
import std.random: uniform;
import std.stdio: writeln;

auto testData = uninitializedArray!(long[])(600_000);
foreach(ref el; testData) el = uniform(0, 10);

ulong seed = 0;
ulong output = 0;
StopWatch sw;
sw.start();

DoNotOptimize(&seed);
output = testData.sum(seed);
DoNotOptimize(&output);
sw.stop();
writeln("time: ", sw.peek);
}
```


Re: Any working REPL program on windows? DREPL doesn't compile

2023-03-23 Thread jmh530 via Digitalmars-d-learn

On Thursday, 23 March 2023 at 11:46:48 UTC, matheus wrote:

On Thursday, 23 March 2023 at 09:39:40 UTC, John Xu wrote:
Anybody know any working REPL program? I failed to find a 
working one.


https://github.com/dlang-community/drepl
can't compile on my Windows 10, dub reports:
...


According to their Readme:


Supported OS


Works on any OS with full shared library support by DMD 
(currently linux, OSX, and FreeBSD).


Matheus.


OP, would you be able to try using drepl with Windows Subsystem 
for Linux (WSL) and report back?


There's a lot of value in getting drepl working on Windows (I 
believe DMD shared library support needs to be improved, and I'm 
not sure whether drepl would work with other D compilers) and in 
getting it in good enough shape to be used with Jupyter. 
Unfortunately, it's beyond my skill set to contribute to this 
effort, and the only way things like this get completed is if 
people are willing and able to work on them.


That being said, I also get a lot of use out of run.dlang.io for 
trying out small ideas. It's not a Jupyter replacement, but the 
way it works now significantly reduces my need for something 
like Jupyter. What it can't handle is if you have some project 
locally that you want to import. So long as you're just trying 
something out that uses a few well-known D projects, it works 
well.


Re: Best way to use C library

2023-05-19 Thread jmh530 via Digitalmars-d-learn

On Friday, 19 May 2023 at 18:31:45 UTC, Maximilian Naderer wrote:

Hello guys,

So what’s currently the best way to use a big C library?

Let’s assume something like

cglm
assimp
glfw

ImportC doesn’t really work for such huge libraries, I’ll 
investigate further. Deimos is outdated or there are no 
bindings. I know that there is a dub package for glfw which 
works fine. But how would I do something for assimp or cglm. 
The dub assimp package is quite outdated.


Am I stuck with manually creating interface files either by 
hand or automation?


I’m hope somebody could give me some insights. Thank you !

Kind regards from Austria
Max


If there are issues using those libraries, you should report the 
bugs.


Re: std.sumtyp and option ?

2023-06-29 Thread jmh530 via Digitalmars-d-learn

On Thursday, 29 June 2023 at 14:18:05 UTC, kiriakov wrote:

How to create option type over std.sumtype ?


```
enum None;
struct Some(T) { T x; }
alias Option = SumType!(Some!T, None);
```
I get
Error: undefined identifier `T`


Try
```d
alias Option(T) = SumType!(Some!T, None);
```

Your version of `Option` isn't a template, so it doesn't know 
what `T` is. This version uses the eponymous template syntax for 
aliases.
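
For reference, a complete sketch (I used an empty struct for the 
`None` tag here, since `SumType` needs a type it can actually 
store):

```d
import std.sumtype : SumType, match;

struct None {}
struct Some(T) { T x; }
alias Option(T) = SumType!(Some!T, None);

void main()
{
    Option!int a = Some!int(5);
    Option!int b = None();

    // unwrap with a fallback of -1 when there is no value
    assert(a.match!((Some!int s) => s.x, (None _) => -1) == 5);
    assert(b.match!((Some!int s) => s.x, (None _) => -1) == -1);
}
```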


Re: Mir-algorithm tutorial?

2023-08-18 Thread jmh530 via Digitalmars-d-learn

On Friday, 18 August 2023 at 08:06:10 UTC, Ki Rill wrote:

On Friday, 18 August 2023 at 07:57:05 UTC, Ki Rill wrote:

On Friday, 18 August 2023 at 07:54:04 UTC, Ki Rill wrote:

Is there an up-to-date tutorial?

It's just painful that I cannot find anything helpful on this 
topic. The official mir-algorithm GitHub repo links to 
articles with old code that won't build if I copy-paste it. 
I'm left hunting down the changes and guessing how things 
should really work.


[API documentation](http://docs.algorithm.dlang.io) link about 
mir-algorithm from [dlang 
tour](https://tour.dlang.org/tour/mir/dub/mir-algorithm) does 
not work.


I opened the 
[issue](https://github.com/dlang-tour/core/issues/788).


I think the issue with dlang tour is unrelated to mir-algorithm.


Re: Mir-algorithm tutorial?

2023-08-18 Thread jmh530 via Digitalmars-d-learn

On Friday, 18 August 2023 at 12:14:45 UTC, Ferhat Kurtulmuş wrote:

On Friday, 18 August 2023 at 09:57:11 UTC, Ki Rill wrote:

[...]


Yes, there aren't many guides around. These are some of them.

https://tastyminerals.github.io/tasty-blog/dlang/2020/03/22/multidimensional_arrays_in_d.html

https://jackstouffer.com/blog/nd_slice.html

Also this piece of code was converted from python-numpy to 
d-mir.


https://github.com/libmir/dcv/blob/master/source/dcv/imgproc/threshold.d#L138

I converted many more from python for DCV.

I think the main problem is the mir libraries won't get updates 
since Ilya recently said that he was not an open source 
developer anymore.


He has said to me he will support them for some time, but won’t 
be adding new features, or something like that.




Re: Visual Studio 2022 no longer debugs D program, need an alternative debugger for Windows

2023-08-26 Thread jmh530 via Digitalmars-d-learn

On Saturday, 26 August 2023 at 16:57:42 UTC, solidstate1991 wrote:
After a recent update, Visual Studio 2022 started to have 
serious trouble with D, namely having trouble displaying 
debug variables, and growing constantly in memory until you 
either stop debugging or it crashes Windows.


Currently I'm resorting to use x64dbg, which is currently the 
best I can use, and I'll try to look up some info on using it 
not as a reverse engineer tool, but as an actual debugger.


You should report this to bugzilla.


Re: Symbolic computations in D

2023-10-30 Thread jmh530 via Digitalmars-d-learn

On Sunday, 29 October 2023 at 10:44:03 UTC, ryuukk_ wrote:

[snip]

This is sad that people recommend OOP for this

Julia doesn't have OOP and it took over, and that's what i'd 
recommend your students to check, C++ is a dead end, Julia it 
is for mathematical computing


If D had tagged union and pattern matching, it would be a great 
candidate to succeed in that field


Julia is more an alternative to R, Matlab, and Python than to C++.


Re: Symbolic computations in D

2023-10-30 Thread jmh530 via Digitalmars-d-learn

On Monday, 30 October 2023 at 13:24:56 UTC, Sergey wrote:

On Monday, 30 October 2023 at 13:13:47 UTC, jmh530 wrote:

On Sunday, 29 October 2023 at 10:44:03 UTC, ryuukk_ wrote:
Julia is more an alternative to R, Matlab, and Python than to C++.


Not really.

Many especially popular and widely used libraries for R and 
Python (NumPy, PyTorch, data.table) are implemented in C/C++. 
Without using them, it is just impossible to get good 
performance.


So there is the "two language" problem: someone has to create a 
C++ engine plus a Python/R interface.
Julia proposes to solve this issue, since you are able to 
implement both the fast engine and the interface in Julia.


There are aspects of Julia that are certainly nice. I'm just 
saying that most users of Julia would be more likely to use it 
instead of R/Matlab/Python, rather than instead of C++.


There are probably many areas where, with R or Python, you would 
normally implement something in C or C++, whereas with Julia you 
could probably do just as well in raw Julia. However, that's not 
to say that Julia doesn't also rely on that same approach when it 
is beneficial. For instance, it can use standard BLAS/LAPACK 
libraries [1] for linear algebra that are written in C.


There's nothing really wrong with that. They shouldn't reinvent 
the wheel if there is already a highly performant solution.


[1] https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/


Re: Using C header libs with importC

2024-01-09 Thread jmh530 via Digitalmars-d-learn

On Monday, 8 January 2024 at 21:56:10 UTC, Renato wrote:

[snip]

Importing .h files from d files isn't supported yet because of 
a dispute with the lookup priority:


https://issues.dlang.org/show_bug.cgi?id=23479
https://issues.dlang.org/show_bug.cgi?id=23547


Ah, too bad. Anyway, I was just playing around... disappointing 
that it doesn't work but I don't really need this for now.


Disappointing, I know, but there is the workaround of creating a 
C file that just `#include`s the header and importing that C file 
instead. Not elegant though.


1 new

2019-08-02 Thread jmh530 via Digitalmars-d-learn
When I navigate to https://forum.dlang.org/ I have a message that 
says "1 new reply" to "your posts." Normally, I click on that "1 
new reply" and find the post that's new, go to it, and the 
message disappears. However, it doesn't seem to go away anymore. 
I tried looking at many different old posts without luck. At one 
point it was up to "2 new replies," but I viewed that other post 
and it went back down to "1 new reply." Does anyone else have 
this?


Re: matplotlibD returns

2019-09-17 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 17 September 2019 at 01:54:01 UTC, Brett wrote:

How does one get return values?

https://matplotlib.org/3.1.0/gallery/statistics/hist.html

Shows that python uses return values to set properties of the 
plot


https://github.com/koji-kojiro/matplotlib-d/blob/master/examples/source/app.d

Does not give any examples of return values and when trying 
gives error about void returns.


If I'm reading the following comment [1] correctly, this feature 
may require changes to matplotlib-d to implement.



[1] 
https://github.com/koji-kojiro/matplotlib-d/issues/7#issuecomment-274001515


Re: C#'s 'is' equivalent in D

2019-10-10 Thread jmh530 via Digitalmars-d-learn

On Thursday, 10 October 2019 at 15:47:58 UTC, Just Dave wrote:

In C# you can do something like:


if (obj is Person)
{
var person = obj as Person;
// do stuff with person...
}

where you can check the type of an object prior to casting. 
Does D have a similar mechanism? It's so widely useful in the 
C# realm that they even added syntactic sugar to allow:


if (obj is Person person)
{
// do stuff with person...
}

I would presume since D has reference objects there must exist 
some mechanism for this...


You mean something like below:

class Person {
int id;
this(int x) {
id = x;
}
}

void main() {
auto joe = new Person(1);
if (is(typeof(joe) == Person)) {
assert(joe.id == 1);
}
}


Re: C#'s 'is' equivalent in D

2019-10-10 Thread jmh530 via Digitalmars-d-learn

On Thursday, 10 October 2019 at 16:33:47 UTC, H. S. Teoh wrote:
On Thu, Oct 10, 2019 at 03:58:02PM +, jmh530 via 
Digitalmars-d-learn wrote:

On Thursday, 10 October 2019 at 15:47:58 UTC, Just Dave wrote:
> In C# you can do something like:
> 
> 
> if (obj is Person)

> {
> var person = obj as Person;
> // do stuff with person...
> }

[...]

You mean something like below:

class Person {
int id;
this(int x) {
id = x;
}
}

void main() {
auto joe = new Person(1);
if (is(typeof(joe) == Person)) {
assert(joe.id == 1);
}
}


Unfortunately, typeof is a compile-time construct, so this will 
not work if you're receiving a Person object via a base class 
reference. The correct solution is to cast the base class to 
the derived type, which will yield null if it's not an instance 
of the derived type.



T


Ah, you mean something like below:

class Person {
int id;
this(int x) {
id = x;
}
}

class Employee : Person {
int job_id;
this(int x, int y) {
super(x);
job_id = y;
}
}

void main() {
import std.stdio : writeln;

Person joe = new Employee(1, 2);

if (is(typeof(joe) == Employee)) {
writeln("here"); //not called in this case
}
}
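
And the cast-based check you describe would be a minimal sketch 
like this, where the cast yields null unless `joe` really is an 
`Employee`:

```d
class Person {
    int id;
    this(int x) { id = x; }
}

class Employee : Person {
    int job_id;
    this(int x, int y) {
        super(x);
        job_id = y;
    }
}

void main() {
    import std.stdio : writeln;

    Person joe = new Employee(1, 2);

    if (auto e = cast(Employee) joe) {
        writeln("here"); // called in this case
        assert(e.job_id == 2);
    }
}
```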



Re: How Different Are Templates from Generics

2019-10-12 Thread jmh530 via Digitalmars-d-learn
On Friday, 11 October 2019 at 17:50:42 UTC, Jonathan M Davis 
wrote:

[snip]


A very thorough explanation!

One follow-up question: would it be possible to mimic the 
behavior of Java generics in D?


Re: How Different Are Templates from Generics

2019-10-12 Thread jmh530 via Digitalmars-d-learn
On Saturday, 12 October 2019 at 21:44:57 UTC, Jonathan M Davis 
wrote:

[snip]



Thanks for the reply.

Like most people, I don't write much D code that uses classes.


The use case I'm thinking of is with allocators, which - to be 
honest - is not something I deal with much in my own code. 
Basically, some of the examples have stuff like 
ScopedAllocator!Mallocator, which would imply that there is a 
different ScopedAllocator for each allocator. However, if you 
apply Java's generics, then you would just have one. Not sure if 
it would make any kind of difference in real-life code, but still 
interesting to think about.


Re: Reading parquet files from D

2019-10-14 Thread jmh530 via Digitalmars-d-learn

On Monday, 14 October 2019 at 19:27:04 UTC, Andre Pany wrote:

[snip]

I found this tool https://github.com/gtkd-developers/gir-to-d 
from Mike Wey which translates GObject GIR files to D headers.


It might be interesting for some of this functionality to get 
included in dpp.


Re: Meta question - what about moving the D - Learn Forum to a seperate StackExchange platform?

2019-10-18 Thread jmh530 via Digitalmars-d-learn
On Friday, 18 October 2019 at 07:35:21 UTC, Martin Tschierschke 
wrote:

[snip]


I think this is something that's been proposed before, but most 
people are happy with just asking a question here and usually 
people are pretty good about helping out with answers when 
possible.


Re: Good way let low-skill people edit CSV files with predefined row names?

2019-10-24 Thread jmh530 via Digitalmars-d-learn

On Thursday, 24 October 2019 at 16:20:20 UTC, jmh530 wrote:

On Thursday, 24 October 2019 at 16:03:26 UTC, Dukc wrote:

[snip]


If they are only opening it in Excel, then you can lock cells. 
You should be able to do that with VBA.


At least I know it works with xlsx files. Not sure about csv now 
that I think about it.


Re: Good way let low-skill people edit CSV files with predefined row names?

2019-10-24 Thread jmh530 via Digitalmars-d-learn

On Thursday, 24 October 2019 at 16:03:26 UTC, Dukc wrote:

[snip]


If they are only opening it in Excel, then you can lock cells. 
You should be able to do that with VBA.


Re: Good way let low-skill people edit CSV files with predefined row names?

2019-10-24 Thread jmh530 via Digitalmars-d-learn

On Thursday, 24 October 2019 at 17:41:21 UTC, Dukc wrote:



This was wrong: Atila's Excel-d enables writing plugin 
functions, but not reading the spreadsheets. There are other 
DUB utilities for that, though.


I quess I will give my employer two options: Either the price 
variables are in an one-column CSV and I distribute the key 
column separately so they don't mess it up, or I take my time 
to do a GUI solution.


Unless somebody has better ideas?


It seems to me that Excel forgets these settings for csv files. I 
tried locking/protecting some rows and it works fine when it is 
open, but then Excel forgets it when you save/close/reopen, which 
makes some sense when you think about it. So you'd have to give 
up the csv and use xlsx if your users were doing it with Excel.


Another solution might be to create another file type that is 
like a csv+. However, you'd then have to have a way to open said 
csv+, which brings you back to the GUI situation.


Re: Running unittests of a module with -betterC

2019-10-30 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 29 October 2019 at 08:45:15 UTC, mipri wrote:

[snip]

-unittest sets the 'unittest' version identifier. So this works:

  unittest {
      assert(0);
  }

  version(unittest) {
      extern(C) void main() {
          static foreach(u; __traits(getUnitTests, __traits(parent, main)))
              u();
      }
  }

dmd -betterC -unittest -run module.d


I feel like this should be added into the compiler so that it 
just works.


Re: Running unittests of a module with -betterC

2019-10-30 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 30 October 2019 at 15:09:40 UTC, jmh530 wrote:

[snip]

I feel like this should be added into the compiler so that it 
just works.


Hmm, maybe only when compiled with -main, but I don't think 
there's a version for that.


Re: Running unittests of a module with -betterC

2019-10-30 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 30 October 2019 at 18:45:50 UTC, Jacob Carlborg 
wrote:

On 2019-10-30 16:09, jmh530 wrote:

I feel like this should be added into the compiler so that it 
just works.


This will only run the unit tests in the current modules. The 
standard way of running the unit tests will run the unit tests 
in all modules.


That's a fair point, but the broader point I was trying to make 
was that anything that makes unittests easier to use in betterC 
code is a good thing.


It seems as if there are three underlying issues here that need 
to be addressed to improve the usefulness of unittests in betterC 
code: 1) a way to gather the unittests from all modules (your 
point), 2) fixing -main for betterC, 3) a way to ensure that said 
unittests are called.


The first suggests to me that it would not be such a bad thing to 
generate ModuleInfo when -unittest is called with -betterC or at 
least just the ModuleInfo needed to aggregate the unittests from 
different modules. This functionality might need to be opt-in.


The second is pretty obvious. dmd -main -betterC is inserting a D 
main function instead of a C one. I submitted a bug request

https://issues.dlang.org/show_bug.cgi?id=20340
as this should be pretty easy to fix.

The final point depends on the two above being resolved. If dmd 
-unittest -main -betterC is called, then the compiler would be 
creating the main function so it can insert any code needed to 
run the unittests (assuming issue 1 above is resolved). By 
contrast, if just dmd -unittest -betterC is called and the user 
has created their own main, then it would be like having to run a 
shared module constructor, which is disabled in betterC. Again, I 
would assume that the benefits would outweigh the costs in 
allowing something like this on an opt-in basis, but the 
available options would be to either a) use -main or b) create a 
mixin that generates the needed unittest code so that people can 
insert it at the top of their main function on their own.






CI: Why Travis & Circle

2019-11-14 Thread jmh530 via Digitalmars-d-learn
I'm curious what the typical motivation is for using both Travis 
CI and Circle CI in a project.


Thanks.


Re: CI: Why Travis & Circle

2019-11-14 Thread jmh530 via Digitalmars-d-learn

On Thursday, 14 November 2019 at 17:06:36 UTC, Andre Pany wrote:

[snip]

With the public availability of Github Actions I highly 
recommend it if you have open source project on Github. If is 
free and works well with D and Dub.


Kind regards
Andre


I'm not that familiar with Github Actions, but I should get more 
familiar with it.


But my broader question is why both? Don't they both do largely 
the same things?


I was motivated to ask this by looking at the mir repositories, 
which have both.

https://github.com/libmir/mir


Re: CI: Why Travis & Circle

2019-11-16 Thread jmh530 via Digitalmars-d-learn
On Saturday, 16 November 2019 at 09:07:45 UTC, Petar Kirov 
[ZombineDev] wrote:

[snip]

Most likely the reason is parallelism. Every CI service offers 
a limited amount of agents that can run in parallel, which 
limits the number of test matrix combinations that you can run 
in a reasonable amount of time. For example, many of the major 
D projects are tested across different OSes and several 
versions of D compilers. Additionally some CIs are faster than 
others. In my experience CircleCI is faster than TravisCI by a 
large margin.




Thank you for the very insightful answer.


Re: matrix operations

2019-11-27 Thread jmh530 via Digitalmars-d-learn
On Wednesday, 27 November 2019 at 16:16:04 UTC, René Heldmaier 
wrote:

Hi,

I'm looking for some basic matrix/vector operations and other 
numeric stuff.


I spent quite a lot time in reading through the mir 
documentation, but i kinda miss the bigger picture. I'm not a 
Python user btw. (I know C,C++,C#,Matlab..).


I have also looked at the documentation of the lubeck package.

What i have seen right now reminds me of the saying "Real 
programmers can write FORTRAN in any language".


Is there a type to do matrix operations with nice syntax (e.g. 
using * operator for multiplication)?


Matrix/vector operations can be done with lubeck, which itself is 
built on top of mir. `mtimes` is the function for matrix multiplication.
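
For instance, a minimal sketch of the lubeck route (untested 
here, and it assumes you have a BLAS/LAPACK library to link 
against):

```d
/+dub.sdl:
dependency "lubeck" version="*"
+/
import kaleidic.lubeck : mtimes;
import mir.ndslice.fuse : fuse;
import std.stdio : writeln;

void main()
{
    auto a = [[1.0, 2.0],
              [3.0, 4.0]].fuse;
    auto b = [[5.0, 6.0],
              [7.0, 8.0]].fuse;
    writeln(a.mtimes(b)); // 2x2 matrix product
}
```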


I would not bank on any changes in operator overloading (e.g. 
allowing an operator for matrix multiplication) any time soon.


Re: 2D matrix operation (subtraction)

2020-02-21 Thread jmh530 via Digitalmars-d-learn

On Friday, 21 February 2020 at 11:53:02 UTC, Ali Çehreli wrote:

[snip]
auto byColumn(R)(R range, size_t n) {
  return Column!R(range, n);
}


mir has byDim for something similar (numir also has alongDim).

This is how you would do it:

import mir.ndslice;
import mir.algorithm.iteration : each;

void main() {
    auto x = [0.0, 1.4, 1.0, 5.2, 2.0, 0.8].sliced(3, 2);
    x.byDim!1.front.each!"a -= 2";
}

My recollection is that it is a little bit trickier if you want 
to subtract a vector from each column of a matrix (the sweep 
function in R).
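
Something like the following might do it, though I haven't tested 
it, so treat it as a sketch:

```d
import mir.ndslice.slice : sliced;
import mir.ndslice.topology : byDim;
import mir.algorithm.iteration : each;

void main() {
    auto x = [0.0, 1.4, 1.0, 5.2, 2.0, 0.8].sliced(3, 2);
    auto v = [1.0, 2.0, 3.0].sliced; // one entry per row
    // subtract v from every column
    x.byDim!1.each!((col) { col[] -= v; });
}
```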


Re: 2D matrix operation (subtraction)

2020-02-21 Thread jmh530 via Digitalmars-d-learn

On Friday, 21 February 2020 at 14:43:37 UTC, jmh530 wrote:

[snip]


Actually, I kind of prefer the relevant line as
x.byDim!1[0].each!"a -= 2";
which makes it a little clearer that you can easily change [0] to 
[1] to apply each to the second column instead.


Re: 2D matrix operation (subtraction)

2020-02-25 Thread jmh530 via Digitalmars-d-learn

On Saturday, 22 February 2020 at 08:29:32 UTC, 9il wrote:

[snip]

Maybe mir.series [1] can work for you.


I had a few other thoughts after looking at septc's solution of 
using

y[0..$, 0] *= 100;
to do the calculation.

1) There is probably scope for an additional select function to 
handle the use case of choosing a specific row/column. For 
instance, what if instead of

y[0..$, 0]
you want
y[0..$, b, 0..$]
for some arbitrary b. I think you would need to do something like
y.select!1(b, b + 1);
which doesn't have the best API, IMO, because you have to repeat 
b. Maybe just an overload for select that only takes one input 
instead of two?


2) The select series of functions does not seem to work as easily 
as array indexing does. When I tried to use the 
select/selectFront functions to do what he is doing, I had to 
something like

auto z = y.selectFront!1(1);
z[] *= 100;
This would adjust y as expected (not z). However, I couldn't 
figure out how to combine these together to one line. For 
instance, I couldn't do

y.selectFront!1(1) *= 100;
or
auto y = x.selectFront!1(1).each!(a => a * 100);
though something like
y[0..$, 0].each!"a *= 100";
works without issue.

It got a little frustrating to combine those with any kind of 
iteration. TBH, I think more than the select functions, the 
functionality I would probably be looking for is more what I was 
doing with byDim!1[0] in the prior post.


I could imagine some very simple version looking like below
auto selectDim(size_t dim, T)(T x, size_t a, size_t b) {
    return x.byDim!dim[a .. b];
}
with a corresponding version
auto selectDim(size_t dim, T)(T x, size_t a) {
    return x.byDim!dim[a .. (a + 1)];
}
This simple version would only work with one dimension, even 
though byDim can handle multiple.


Re: How to sum multidimensional arrays?

2020-02-27 Thread jmh530 via Digitalmars-d-learn

On Thursday, 27 February 2020 at 15:28:01 UTC, p.shkadzko wrote:

On Thursday, 27 February 2020 at 14:15:26 UTC, p.shkadzko wrote:
This works but it does not look very efficient considering we 
flatten and then calling array twice. It will get even worse 
with 3D arrays.


And yes, benchmarks show that summing 2D arrays like in the 
example above is significantly slower than in numpy. But that 
is to be expected... I guess.


D -- sum of two 5000 x 6000 2D arrays: 3.4 sec.
numpy -- sum of two 5000 x 6000 2D arrays: 0.0367800739913946 
sec.


What's the performance of mir like?

The code below seems to work without issue.

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.17"
dependency "mir-random" version="~>2.2.10"
+/
import std.stdio : writeln;
import mir.random : Random, unpredictableSeed;
import mir.random.variable: UniformVariable;
import mir.random.algorithm: randomSlice;

auto rndMatrix(T)(T max, in int rows, in int cols)
{
    auto gen = Random(unpredictableSeed);
    auto rv = UniformVariable!T(0.0, max);
    return randomSlice(gen, rv, rows, cols);
}

void main() {
    auto m1 = rndMatrix(10.0, 2, 3);
    auto m2 = rndMatrix(10.0, 2, 3);
    auto m3 = m1 + m2;

    writeln(m1);
    writeln(m2);
    writeln(m3);
}


Re: How to sum multidimensional arrays?

2020-02-27 Thread jmh530 via Digitalmars-d-learn

On Thursday, 27 February 2020 at 16:39:15 UTC, 9il wrote:

[snip]
Few performances nitpick for your example to be fair with 
benchmarking againt the test:

1. Random (default) is slower than Xorshift.
2. double is twice as large as int and requires twice as much 
memory, so it would be twice as slow as int for large matrices.


Check the prev. post, we have posted almost in the same time ;)
https://forum.dlang.org/post/izoflhyerkiladngy...@forum.dlang.org


Those differences largely came from a lack of attention to 
detail. I didn't notice the Xorshift until after I posted. I used 
double because it's such a force of habit for me to use 
continuous distributions.


I came across this in the documentation.
UniformVariable!T uniformVariable(T = double)(in T a, in T b)
if(isIntegral!T)
and did a double-take until I read the note associated with it in 
the source.


Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread jmh530 via Digitalmars-d-learn

On Sunday, 1 March 2020 at 20:58:42 UTC, p.shkadzko wrote:

Hello again,

[snip]



What compiler did you use and what flags?


Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread jmh530 via Digitalmars-d-learn

On Monday, 2 March 2020 at 13:35:15 UTC, p.shkadzko wrote:

[snip]


Thanks. I don't have time right now to review this thoroughly. My 
recollection is that the dot product of two matrices is actually 
matrix multiplication, correct? It generally makes sense to defer 
to other people's implementation of this. I recommend trying 
lubeck's version against numpy. It uses a blas/lapack 
implementation. mir-glas, I believe, also has a version.


Also, I'm not sure if the fastmath attribute would do anything 
here, but something worth looking into.




Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread jmh530 via Digitalmars-d-learn

On Monday, 2 March 2020 at 18:17:05 UTC, p.shkadzko wrote:

[snip]
I tested @fastmath and @optmath for toIdx function and that 
didn't change anyting.


@optmath is from mir, correct? I believe it implies @fastmath. 
The latest code in mir doesn't have it doing anything else at 
least.


Re: Improving dot product for standard multidimensional D arrays

2020-03-02 Thread jmh530 via Digitalmars-d-learn

On Monday, 2 March 2020 at 20:22:55 UTC, p.shkadzko wrote:

[snip]

Interesting growth of processing time. Could it be GC?

+------------------+-------------+
| matrixDotProduct | time (sec.) |
+------------------+-------------+
| 2x[100 x 100]    |        0.01 |
| 2x[1000 x 1000]  |        2.21 |
| 2x[1500 x 1000]  |        5.6  |
| 2x[1500 x 1500]  |        9.28 |
| 2x[2000 x 2000]  |       44.59 |
| 2x[2100 x 2100]  |       55.13 |
+------------------+-------------+


Your matrixDotProduct creates a new Matrix and then returns it. 
When you look at the Matrix struct, it is basically building upon 
D's GC-backed slices. So yes, you are using the GC here.


You could try creating the output matrices outside of the 
matrixDotProduct function and then pass them by pointer or 
reference into the function if you want to profile just the 
calculation.
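
Roughly like the sketch below (the Matrix type and the loop are just 
stand-ins for your actual code, which was snipped, so treat this as a 
rough outline rather than a drop-in replacement):

struct Matrix(T)
{
    T[] data;
    int rows;
    int cols;
}

// write into a preallocated result instead of allocating a new Matrix inside
void matrixDotProduct(T)(in Matrix!T a, in Matrix!T b, ref Matrix!T result)
{
    foreach (i; 0 .. a.rows)
        foreach (j; 0 .. b.cols)
        {
            T sum = 0;
            foreach (k; 0 .. a.cols)
                sum += a.data[i * a.cols + k] * b.data[k * b.cols + j];
            result.data[i * result.cols + j] = sum;
        }
}

void main()
{
    auto a = Matrix!double([1.0, 2, 3, 4], 2, 2);
    auto b = Matrix!double([5.0, 6, 7, 8], 2, 2);
    // allocate the output once, outside the code being timed
    auto c = Matrix!double(new double[4], 2, 2);
    matrixDotProduct(a, b, c);
    assert(c.data == [19.0, 22, 43, 50]);
}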


Re: Improving dot product for standard multidimensional D arrays

2020-03-03 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 3 March 2020 at 10:25:27 UTC, maarten van damme wrote:
it is difficult to write an efficient matrix matrix 
multiplication in any language. If you want a fair comparison, 
implement your naive method in python and compare those timings.

[snip]


And of course there's going to be a big slowdown in using native 
Python. NumPy basically calls BLAS in the background. A naive C 
implementation might be another comparison.


Re: How to sort 2D Slice along 0 axis in mir.ndslice ?

2020-03-10 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 10 March 2020 at 23:31:55 UTC, p.shkadzko wrote:

[snip]


Below does the same thing as the numpy version.

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.18"
+/
import mir.ndslice.sorting : sort;
import mir.ndslice.topology : byDim;
import mir.ndslice.slice : sliced;
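// note: 'each' (from mir.algorithm.iteration) is used below but not imported here; the reply below fixes this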

void main() {
auto m = [1, -1, 3, 2, 0, -2, 3, 1].sliced(2, 4);
m.byDim!0.each!(a => a.sort);
}


Re: How to sort 2D Slice along 0 axis in mir.ndslice ?

2020-03-11 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 11 March 2020 at 06:12:55 UTC, 9il wrote:

[snip]

Almost the same, just fixed import for `each` and a bit polished

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.18"
+/
import mir.ndslice;
import mir.ndslice.sorting;
import mir.algorithm.iteration: each;

void main() {
auto m = [[1, -1, 3, 2],
  [0, -2, 3, 1]].fuse;
m.byDim!0.each!sort;

import std.stdio;
m.byDim!0.each!writeln;
}


Doh on the 'each' import.

Also, I don't think I had used fuse before. That's definitely 
helpful.


Re: dub libs from home directory on windows

2020-03-18 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 18 March 2020 at 15:10:52 UTC, Виталий Фадеев wrote:

On Wednesday, 18 March 2020 at 13:52:20 UTC, Abby wrote:


I cannot build my app, so I was wondering if there is some 
clever way to solve this without a hardcoded path to my profile 
name.


Thank you very much for your help.


I see, you want it without a hardcoded path...


I usually use something like ./folder/file.extension to avoid a 
hardcoded path.


I also recommend taking a look at some other dub files to get a 
sense of how others do it.
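
For example, dub.sdl supports path-based dependencies relative to the 
project, which avoids anything under the profile directory (the names 
below are made up):

// a relative, path-based dependency instead of an absolute path into a profile directory
dependency "somelib" path="./vendor/somelib"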


Re: Blog post about multidimensional arrays in D

2020-03-27 Thread jmh530 via Digitalmars-d-learn

On Friday, 27 March 2020 at 10:57:10 UTC, p.shkadzko wrote:
I decided to write a small blog post about multidimensional 
arrays in D on what I learnt so far. It should serve as a brief 
introduction to Mir slices and how to do basic manipulations 
with them. It started with a small file with snippets for 
personal use but then kind of escalated into an idea of a blog 
post.


However, given the limited amount of time I spent in the Mir docs 
and their conciseness, it would be great if anyone had a second 
look and told me what is wrong or missing, because I have a 
feeling a lot of things might be. It would be a great opportunity 
for me to learn and also improve it or rewrite some parts.


All is here: 
https://github.com/tastyminerals/tasty-blog/blob/master/_posts/2020-03-22-multidimensional_arrays_in_d.md


Thanks for doing this.

A small typo on this line
a.byDim1;

I think there would be a lot of value in doing another blogpost 
to cover some more advanced topics. For instance, mir supports 
three different SliceKinds and the documentation explaining the 
difference has never been very clear. I don't really feel like 
I've ever had a clear understanding of the low-level differences 
between them. The pack/ipack/unpack functions are also pretty 
hard to understand from the documentation.
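
For reference, my loose (and possibly wrong) mental model is that the 
kinds differ in how much stride information the slice stores, roughly 
as in this sketch:

/+dub.sdl:
dependency "mir-algorithm" version="*"
+/
import mir.ndslice;

void main() {
    // Contiguous: the data is dense and row-major, so no strides are stored
    auto a = iota(3, 4).slice;

    // Canonical: strides for every dimension except the innermost one,
    // so the rows are still dense
    auto b = a.canonical;

    // Universal: a stride for every dimension, which is why transposed
    // returns a Universal slice
    auto c = a.universal;
    auto d = a.transposed;
}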


Re: @safe function with __gshared as default parameter value

2020-04-08 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 8 April 2020 at 18:50:16 UTC, data pulverizer wrote:

On Wednesday, 8 April 2020 at 16:53:05 UTC, Anonymouse wrote:

```
import std.stdio;

@safe:

__gshared int gshared = 42;

void foo(int i = gshared)
{
writeln(i);
}

void main()
{
foo();
}
```

This currently works; `foo` is `@safe` and prints the value of 
`gshared`. Changing the call in main to `foo(gshared)` errors.


Should it work, and can I expect it to keep working?


According to the manual it shouldn't work at all 
(https://dlang.org/spec/function.html#function-safety, where it 
says Safe Functions "Cannot access __gshared variables."), so I 
don't know why calling it as `foo()` works.


You still wouldn't be able to manipulate gshared within the 
function. Though it may still be a problem for @safe...


import std.stdio;

__gshared int gshared = 42;

@safe void foo(int i = gshared)
{
i++;
writeln(i);
}

void main()
{
writeln(gshared);
foo();
writeln(gshared);
gshared++;
writeln(gshared);
foo();
writeln(gshared);
}


Re: @safe function with __gshared as default parameter value

2020-04-08 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 8 April 2020 at 19:29:17 UTC, Anonymouse wrote:

[snip]

It works with `ref int` too.


```
__gshared int gshared = 42;

void foo(ref int i = gshared) @safe
{
++i;
}
void main()
{
assert(gshared == 42);
foo();
assert(gshared == 43);
}
```


Well that definitely shouldn't happen. I would file a bug report.


mir: How to change iterator?

2020-04-14 Thread jmh530 via Digitalmars-d-learn
In the code below, I multiply some slice by 5 and then check 
whether it equals another slice. This fails for mir's approxEqual 
because the two are not the same types (yes, I know that isClose 
in std.math works). I was trying to convert the y variable below 
to have the same double* iterator as the term on the right, but 
without much success. I tried std.conv.to and the as, slice, and 
sliced functions in mir.


I figure I am missing something basic, but I can't quite figure 
it out...



/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.math.common: approxEqual;
import mir.ndslice.slice : sliced;

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;

assert(approxEqual(y, [2.5, 2.5].sliced(2)));
}


Re: mir: How to change iterator?

2020-04-16 Thread jmh530 via Digitalmars-d-learn

On Thursday, 16 April 2020 at 19:59:57 UTC, Basile B. wrote:

[snip]

And remove the extra assert() BTW... I don't know why this is 
accepted.


Thanks, I hadn't realized that about approxEqual. I think that 
resolves my near-term issue; I would need to play around with 
things a little more to be 100% sure though.


That being said, I'm still unsure of what I would need to do to 
get the following code to compile.


/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.ndslice;

void foo(Iterator, SliceKind kind)(Slice!(Iterator, 1, kind) x, 
Slice!(Iterator, 1, kind) y) {

import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y);
}


Re: Multiplying transposed matrices in mir

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Sunday, 19 April 2020 at 17:07:36 UTC, p.shkadzko wrote:

I'd like to calculate XX^T where X is some [m x n] matrix.

// create a 3 x 3 matrix
Slice!(double*, 2LU) a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 
2.1].sliced(3, 3);

auto b = a * a.transposed; // error

Looks like it is not possible due to "incompatible types for 
(a) * (transposed(a)): Slice!(double*, 2LU, 
cast(mir_slice_kind)2) and Slice!(double*, 2LU, 
cast(mir_slice_kind)0)"


I'd like to understand why and how should this operation be 
performed in mir.
Also, what does the last number "0" or "2" means in the type 
definition "Slice!(double*, 2LU, cast(mir_slice_kind)0)"?


2 is Contiguous, 0 is Universal, 1 is Canonical. To this day, I 
don’t have the greatest understanding of the difference.


Try the mtimes function in lubeck.
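
Something like the sketch below (untested as written, but mtimes is 
lubeck's matrix-multiplication routine, and the versions are just the 
ones I've used elsewhere):

/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck : mtimes;

void main() {
    auto a = [2.1, 1.0, 3.2,
              4.5, 2.4, 3.3,
              1.5, 0,   2.1].sliced(3, 3);

    // matrix product A * A^T (what numpy's a.dot(a.T) computes),
    // as opposed to the element-wise a * a.transposed
    auto b = a.mtimes(a.transposed);
}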


Re: Multiplying transposed matrices in mir

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Sunday, 19 April 2020 at 17:55:06 UTC, p.shkadzko wrote:

[snip]

So, lubeck mtimes is equivalent to NumPy "a.dot(a.transpose())".


There is elementwise multiplication of two matrices of the same 
size, and then there is matrix multiplication. Two different things. 
You had initially said you were using an m x n matrix to do the 
calculation. Elementwise multiplication only works for matrices of 
the same size, which is only true in your transpose case when they 
are square. The mtimes function is like dot or @ in Python and does 
real matrix multiplication, which works for generic m x n matrices. 
If you want elementwise multiplication of a square matrix and 
its transpose in mir, then I believe you need to call 
assumeContiguous after transposed.


Re: Multiplying transposed matrices in mir

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Sunday, 19 April 2020 at 19:20:28 UTC, p.shkadzko wrote:

[snip]
well no, "assumeContiguous" reverts the results of the 
"transposed" and it's "a * a".
I would expect it to stay transposed as NumPy does "assert 
np.all(np.ascontiguous(a.T) == a.T)".


Ah, you're right. I use it in other places where it hasn't been 
an issue.


I can do it with an allocation (below) using the built-in syntax, 
but not sure how do-able it is without an allocation (Ilya would 
know better than me).


/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck;

void main() {
auto a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 
2.1].sliced(3, 3);

auto b = a * a.transposed.slice;
}


Re: Multiplying transposed matrices in mir

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Sunday, 19 April 2020 at 20:29:54 UTC, p.shkadzko wrote:

[snip]

Thanks. I somehow missed the whole point of "a * a.transposed" 
not working because "a.transposed" is not allocated.


a.transposed is just a view of the original matrix. Even when I 
tried to do a raw for loop I ran into issues because modifying 
the original a in any way caused all the calculations to be wrong.


Honestly, it's kind of rare that I would do an element-wise 
multiplication of a matrix and its transpose.


Re: mir: How to change iterator?

2020-04-19 Thread jmh530 via Digitalmars-d-learn

On Thursday, 16 April 2020 at 20:59:36 UTC, jmh530 wrote:

[snip]

/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.ndslice;

void foo(Iterator, SliceKind kind)(Slice!(Iterator, 1, kind) x, 
Slice!(Iterator, 1, kind) y) {

import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y);
}


This is really what I was looking for (need to make allocation, 
unfortunately)


/+dub.sdl:
dependency "mir-algorithm" version="~>3.7.28"
+/

import mir.ndslice;

void foo(Iterator, SliceKind kind)(Slice!(Iterator, 1, kind) x, 
Slice!(Iterator, 1, kind) y) {

import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y.slice);
}


Re: mir: How to change iterator?

2020-04-20 Thread jmh530 via Digitalmars-d-learn

On Monday, 20 April 2020 at 00:27:40 UTC, 9il wrote:

[snip]

Using two arguments Iterator1, Iterator2 works without 
allocation


/+dub.sdl: dependency "mir-algorithm" version="~>3.7.28" +/
import mir.ndslice;

void foo(Iterator1, Iterator2, SliceKind kind)
(Slice!(Iterator1, 1, kind) x, Slice!(Iterator2, 1, kind) y)
{
import std.stdio : writeln;
writeln("here");
}

void main() {
auto x = [0.5, 0.5].sliced(2);
auto y = x * 5.0;
foo(x, y);
}


Thanks, but I was thinking about the situation where someone else 
has written the function and didn't allow for multiple iterators 
for whatever reason.


Re: Multiplying transposed matrices in mir

2020-04-20 Thread jmh530 via Digitalmars-d-learn

On Monday, 20 April 2020 at 19:06:53 UTC, p.shkadzko wrote:

[snip]
It is. I was trying to calculate the covariance matrix of some 
dataset X which would be XX^T.


Incorrect. The covariance matrix is calculated with matrix 
multiplication, not element-wise multiplication. For instance, I 
often work with time series data that is T x N where T > N. You 
couldn't do that calculation with element-wise multiplication in 
that case.


Try using Lubeck's covariance function or checking your results 
with the covariance function in other languages.
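
A rough sketch of what I mean (the data below is made up, and it 
assumes the columns of X have already been de-meaned):

/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck : mtimes;

void main() {
    // T x N data matrix: T = 4 observations, N = 2 variables,
    // already centered (column means removed)
    auto x = [ 1.0, -0.5,
              -1.0,  0.5,
               2.0, -1.5,
              -2.0,  1.5].sliced(4, 2);

    // covariance uses matrix multiplication: (X^T X) / (T - 1)
    auto cov = x.transposed.mtimes(x);
    cov[] /= (x.length!0 - 1);
}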




Attribute inference within template functions

2020-04-22 Thread jmh530 via Digitalmars-d-learn
I was trying to write a function that has different behavior 
depending on whether it is called from @nogc code or not. However, 
I am noticing that this does not seem possible because of the 
timing of attribute inference.


If I call getFunctionAttributes within foo below, it reports 
@system, but when foo is called from main the attributes are 
correctly inferred. It is as if attribute inference happens after 
getFunctionAttributes is called.


Is there any way to get the correct function attributes within a 
template function?


auto foo(T)(T x) {
pragma(msg, __traits(getFunctionAttributes, foo!T));
pragma(msg, __traits(getFunctionAttributes, foo!int));
return x;
}

void main() {
auto x = foo(1);
pragma(msg, __traits(getFunctionAttributes, foo!int));
}


Implicit Function Template Instantiation (IFTI) Question

2020-04-27 Thread jmh530 via Digitalmars-d-learn
When using a template with multiple functions within it, is it 
possible to access the underlying functions directly? Not sure if I 
am missing anything, but what works when the functions are named 
differently from the enclosing template doesn't work when the 
functions share its name.


import std.stdio: writeln;
import std.traits: isFunction;

template foo(T) {
void foo(U)(U x) {
writeln("here0");
}

void foo(U, V)(U x, V y) {
writeln("there0");
}
}

template bar(T) {
void baz(U)(U x) {
writeln("here1");
}

void baz(U, V)(U x, V y) {
writeln("there1");
}
}

void foobar(T)(T x) {}

void main() {
foo!int.foo!(float, double)(1f, 2.0); //Error: template 
foo(U)(U x) does not have property foo
writeln(isFunction!(foo!int)); //prints false, as expected 
b/c not smart enough to look through
writeln(isFunction!(foo!int.foo!float)); //Error: template 
identifier foo is not a member of template 
onlineapp.foo!int.foo(U)(U x)

writeln(isFunction!(foo!int.foo!(float, double))); //ditto

bar!int.baz!(float, double)(1f, 2.0); //prints there1
writeln(isFunction!(bar!int.baz!(float, double))); //prints 
true


writeln(isFunction!(foobar!int)); //prints true
}




Re: Implicit Function Template Instantiation (IFTI) Question

2020-04-27 Thread jmh530 via Digitalmars-d-learn
On Monday, 27 April 2020 at 17:40:06 UTC, Steven Schveighoffer 
wrote:

[snip]


Thanks for that. Very detailed.

In terms of a use case, we just added a center function to mir 
[1]. It can take an alias to a function. I wanted to add a check 
that the arity of the function was 1, but it turned out that I 
couldn't do that for mean [2] because it has a similar structure 
as what I posted and arity relies on isCallable, which depends on 
isFunction.



[1] http://mir-algorithm.libmir.org/mir_math_stat.html#.center
[2] http://mir-algorithm.libmir.org/mir_math_stat.html#.mean
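
A stripped-down illustration of the problem, using a toy mean 
template (not mir's actual implementation), in case it helps:

import std.traits : arity, isFunction;

// toy stand-in with the same eponymous-template-with-overloads structure as mir's mean
template mean(T) {
    T mean(U)(U x) { return cast(T) x; }
    T mean(U, V)(U x, V y) { return cast(T) (x + y) / 2; }
}

void main() {
    // isFunction can't see through the eponymous member...
    static assert(!isFunction!(mean!double));
    // ...so arity (which relies on isCallable/isFunction) can't be instantiated
    static assert(!__traits(compiles, arity!(mean!double)));
}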



Re: Python eval() equivalent in Dlang working in Runtime?

2020-05-01 Thread jmh530 via Digitalmars-d-learn

On Friday, 1 May 2020 at 15:42:54 UTC, Baby Beaker wrote:

There is a Python eval() equivalent in Dlang working in Runtime?


You might find arsd's script.d interesting [1], but it's more 
like a blend between D and javascript.

[1]https://github.com/adamdruppe/arsd/blob/d0aec8e606a90c005b9cac6fcfb2047fb61b38fa/script.d


CTFE and Static If Question

2020-05-07 Thread jmh530 via Digitalmars-d-learn
I am curious how CTFE and static ifs interact. In particular, 
whether an enum bool passed as a template parameter or as a 
run-time argument will turn an if statement into something like a 
static if statement (perhaps after the compiler optimizes other 
code away). In the code below, I have a function that takes a bool 
as a template parameter and another that takes it as a run-time 
parameter. In the first, I just pass a compile-time bool (y0) into 
it. In the second, I pass both run-time (x0) and compile-time (y0) 
bools.


Does foo!y0(rt) generate the same code as foo(rt, y0)?

How is the code generated by foo(rt, x0) different from 
foo(rt,y0)?


auto foo(bool rtct)(int rt) {
static if (rtct)
return rt + 1;
else
return rt;
}

auto foo(int rt, bool rtct) {
if (rtct == true)
return rt + 1;
else
return rt;
}

void main() {
int rt = 3;
bool x0 = true;
bool x1 = false;
assert(foo(rt, x0) == 4);
assert(foo(rt, x1) == 3);

enum y0 = true;
enum y1 = false;
assert(foo!y0(rt) == 4);
assert(foo!y1(rt) == 3);
assert(foo(rt, y0) == 4);
assert(foo(rt, y1) == 3);
}




Re: CTFE and Static If Question

2020-05-07 Thread jmh530 via Digitalmars-d-learn

On Thursday, 7 May 2020 at 15:29:01 UTC, H. S. Teoh wrote:

[snip]


You explained things very well, thanks.


Re: CTFE and Static If Question

2020-05-07 Thread jmh530 via Digitalmars-d-learn

On Thursday, 7 May 2020 at 15:34:21 UTC, ag0aep6g wrote:

[snip]

The `static if` is guaranteed to be evaluated during 
compilation. That means, `foo!y0` effectively becomes this:


auto foo(int rt) { return rt + 1; }

There is no such guarantee for `foo(rt, y0)`. It doesn't matter 
that y0 is an enum.


But a half-decent optimizer will have no problem replacing all 
your calls with their results. Compared with LDC and GDC, DMD 
has a poor optimizer, but even DMD turns this:


int main() {
int rt = 3;
bool x0 = true;
bool x1 = false;
enum y0 = true;
enum y1 = false;
return
foo(rt, x0) +
foo(rt, x1) +
foo!y0(rt) +
foo!y1(rt) +
foo(rt, y0) +
foo(rt, y1);
}

into this:

int main() { return 21; }


Thanks for the reply.

The particular use case I'm thinking of is more like the code below, 
where some function bar (that returns a T) is called before the 
return. Not sure if that matters or not for inlining the results.


T foo(T)(T[] x, bool y) {
if (y)
return bar(x) / x.length;
else
return bar(x) / (x.length - 1);
}


Re: CTFE and Static If Question

2020-05-07 Thread jmh530 via Digitalmars-d-learn

On Thursday, 7 May 2020 at 17:59:30 UTC, Paul Backus wrote:

On Thursday, 7 May 2020 at 15:00:18 UTC, jmh530 wrote:

Does foo!y0(rt) generate the same code as foo(rt, y0)?

How is the code generated by foo(rt, x0) different from 
foo(rt,y0)?


You can look at the generated code using the Compiler Explorer 
at d.godbolt.org. Here's a link to your example, compiled with 
ldc, with optimizations enabled:


https://d.godbolt.org/z/x5K7P6

As you can see, the non-static-if version has a runtime 
comparison and a conditional jump, and the static-if version 
does not. However, it doesn't make a difference in the end, 
because the calls have been optimized out, leaving an empty 
main function.


Thanks for that. I forgot how much nicer godbolt is for looking at 
assembly than run.dlang.org. Or maybe it's just that the 
optimized assembly from LDC looks a lot simpler than DMD's?


I eventually played around with it for a bit and ended up with 
what's below. When compiled with ldc -O -release, some of the 
functions have code that is generated a little differently than 
the one above (the assembly looks a little prettier when using 
ints than doubles). It looks like there is a little bit of 
template bloat too, in that it generates something like four 
different versions of the run-time function that all basically do 
the same thing (with some slight differences I don't really 
understand). Anyway, I think that's the first time I've ever used 
__traits(compiles) with a static if, and I don't think I've ever 
used a typed template alias parameter (alias bool) before, but I 
thought it was kind of cool.



int foo(alias bool rtct)(int[] x) {
static if (__traits(compiles, {static if (rtct) { enum val = 
rtct;}})) {

static if (rtct) {
return bar(x) / cast(int) x.length;
} else {
return bar(x) / cast(int) (x.length - 1);
}
} else {
if (rtct)
return bar(x) / cast(int) x.length;
else
return bar(x) / cast(int) (x.length - 1);
}
}

int foo(int[] x, bool rtct) {
if (rtct)
return foo!true(x);
else
return foo!false(x);
}

int bar(int[] x) {
return x[0] + x[1];
}


void main() {
import std.stdio: writeln;

int[] a = [1, 2, 3, 4, 5];
bool x0 = true;
bool x1 = false;
int result0 = foo(a, x0);
int result1 = foo(a, x1);
int result2 = foo!x0(a);
int result3 = foo!x1(a);

enum y0 = true;
enum y1 = false;
int result0_ = foo(a, y0);
int result1_ = foo(a, y1);
int result2_ = foo!y0(a);
int result3_ = foo!y1(a);
}


Re: Mir Slice.shape is not consistent with the actual array shape

2020-05-24 Thread jmh530 via Digitalmars-d-learn

On Sunday, 24 May 2020 at 14:21:26 UTC, Pavel Shkadzko wrote:

[snip]

Sorry for the typo. It should be "auto arrSlice = a.sliced;"


Try using fuse

/+dub.sdl:
dependency "mir-algorithm" version="*"
+/
import std.stdio;
import std.conv;
import std.array: array;
import std.range: chunks;
import mir.ndslice;

int[] getShape(T : int)(T obj, int[] dims = null)
{
return dims;
}

// return arr shape
int[] getShape(T)(T obj, int[] dims = null)
{
dims ~= obj.length.to!int;
return getShape!(typeof(obj[0]))(obj[0], dims);
}

void main() {
int[] arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 
15, 16];

int[][][] a = arr.chunks(4).array.chunks(2).array;

int err;
writeln(arr);
writeln(a.shape(err));

auto aSlice = a.fuse;
writeln(aSlice);
writeln(aSlice.shape);

}


Static assert triggered in struct constructor that shouldn't be called

2020-05-24 Thread jmh530 via Digitalmars-d-learn
The following code results in the static assert in the 
constructor being triggered, even though I would have thought no 
constructor would have been called. I know that there is an easy 
fix for this (move the static if outside the constructor), but it 
still seems like it doesn't make sense.


enum Foo
{
A,
B,
}

struct Bar(Foo foo)
{
static if (foo == Foo.A)
{
float x = 0.5;
long y = 1;
}
else static if (foo == Foo.B)
{
int p = 1;
}

this(long exp, float x)
{
static if (foo == Foo.A) {
this.y = exp;
this.x = x;
} else {
static assert(0, "Not implemented");
}
}
}

void main()
{
Bar!(Foo.B) x;
}


Re: Static assert triggered in struct constructor that shouldn't be called

2020-05-24 Thread jmh530 via Digitalmars-d-learn

On Sunday, 24 May 2020 at 21:43:34 UTC, H. S. Teoh wrote:
On Sun, May 24, 2020 at 09:34:53PM +, jmh530 via 
Digitalmars-d-learn wrote:
The following code results in the static assert in the 
constructor being triggered, even though I would have thought 
no constructor would have been called. I know that there is an 
easy fix for this (move the static if outside the 
constructor), but it still seems like it doesn't make sense.

[...]

The problem is that static assert triggers when the function is 
compiled (not when it's called), and since your ctor is not a 
template function, it will always be compiled. Hence the static 
assert will always trigger.



T


Thanks. Makes sense.
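
For completeness, the easy fix I mentioned (moving the static if 
outside the constructor) looks something like this:

enum Foo
{
    A,
    B,
}

struct Bar(Foo foo)
{
    static if (foo == Foo.A)
    {
        float x = 0.5;
        long y = 1;

        // the constructor only exists when foo == Foo.A,
        // so Bar!(Foo.B) never compiles it
        this(long exp, float x)
        {
            this.y = exp;
            this.x = x;
        }
    }
    else static if (foo == Foo.B)
    {
        int p = 1;
    }
}

void main()
{
    Bar!(Foo.B) x;
    auto y = Bar!(Foo.A)(2, 1.5f);
}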


Re: [GTK-D] dub run leads to lld-link: error: could not open libcmt.lib: no such file or directory

2020-05-26 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 26 May 2020 at 15:16:25 UTC, jmh530 wrote:

[snip]
Another short-term fix might be to try compiling with the -m32 
dflag (need to put in your dub.sdl/json).




Sorry, easier is
dub test --arch=x86



Re: [GTK-D] dub run leads to lld-link: error: could not open libcmt.lib: no such file or directory

2020-05-26 Thread jmh530 via Digitalmars-d-learn

On Wednesday, 13 May 2020 at 15:26:48 UTC, BoQsc wrote:

[snip]

Linking...
lld-link: error: could not open libcmt.lib: no such file or 
directory
lld-link: error: could not open OLDNAMES.lib: no such file or 
directory

Error: linker exited with status 1
C:\D\dmd2\windows\bin\dmd.exe failed with exit code 1.


I just ran into this issue as well. I haven't had a chance to fix 
it on my end, but this is what I've found.


This line
Performing "debug" build using C:\D\dmd2\windows\bin\dmd.exe for 
x86_64.

means that it is compiling a 64bit program on Windows.

On Windows, if you are trying to compile a 64-bit program, then DMD 
will try to link with lld if it cannot find a Microsoft linker 
[1]. The failure is likely due to your Microsoft linker (or lld) 
not being installed properly, being the wrong version, or being 
configured improperly. If you don't have Visual Studio Community 
installed, installing it might be a first step. Another short-term 
fix might be to try compiling with the -m32 dflag (you need to put 
it in your dub.sdl/json).


[1] https://dlang.org/dmd-windows.html#linking
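
For the dub.sdl route, it would be something like the following (the 
dependency line is just a placeholder for whatever the project 
actually pulls in):

dependency "gtk-d" version="*"

// -m32 requests a 32-bit build, so dmd falls back to its bundled
// OPTLINK linker instead of the Microsoft linker / lld-link
dflags "-m32"

From the command line, the --arch=x86 switch mentioned above does the 
same thing without touching the dub file.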

