Re: Recommendations on porting Python to D

2024-07-17 Thread rkompass via Digitalmars-d-learn

On Monday, 15 July 2024 at 19:40:01 UTC, mw wrote:

On Friday, 12 July 2024 at 18:07:50 UTC, mw wrote:

[...]


FYI, now merged into the main branch:

https://github.com/py2many/py2many/tree/main/pyd


This is great and certainly deserves its own discussion thread 
in General.


Did you try to convert any of the pystone programs?
This would allow for benchmarking comparisons with e.g. nuitka or 
other approaches to compiling Python.




Re: bool passed by ref, safe or not ?

2024-06-04 Thread rkompass via Digitalmars-d-learn

On Tuesday, 4 June 2024 at 16:58:50 UTC, Basile B. wrote:
question in the header, code in the body, execute on a X86 or 
X86_64 CPU


```d
module test;

void setIt(ref bool b) @safe
{
b = false;
}

void main(string[] args)
{
ushort a = 0b;
bool* b = cast(bool*)&a;
setIt(*b);
assert(a == 0b); // what actually happens
assert(a == 0b1110); // what would be safe
}
```

I understand that the notion of `bool` doesn't exist on X86, 
hence what will be used is rather an instruction that writes to 
the lower 8 bits, but with a 7-bit corruption.


Do I corrupt memory here or not ?
Is that a safety violation ?


No, everything is fine.
A `bool` has the same size as a `byte` or `char`.
So your cast makes `&a` a pointer to a byte.
And `setIt` has to zero this byte completely; otherwise it would 
not be false in the sense of the `bool` type.
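The size claim is easy to check. A minimal sketch (my addition; the concrete values and the assumption about which byte of `a` the cast reaches are only illustrative):

```d
import std.stdio;

void main()
{
    // bool, byte and char all occupy exactly one byte
    static assert(bool.sizeof == 1 && byte.sizeof == 1 && char.sizeof == 1);

    ushort a = 0xFFFF;
    bool* b = cast(bool*) &a;  // reinterpret one byte of a as bool
    *b = false;                // writes the whole byte to 0, not a single bit
    writeln(a);                // one byte of a is cleared, the other untouched
}
```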


Re: Inconsistent chain (implicitly converts to int)

2024-04-06 Thread rkompass via Digitalmars-d-learn

On Friday, 5 April 2024 at 21:26:10 UTC, Salih Dincer wrote:

On Friday, 5 April 2024 at 21:16:42 UTC, rkompass wrote:


In the first example the ints are converted to doubles (their 
common type).
But they appear as ints because writeln does not write a 
trailing .0.


But it doesn't work as you say! I even tried it on an older 
version and got the same result.


SDB@79


I checked:

```d
import std.stdio,
   std.range,
   std.algorithm;

struct N(T)
{
  T last, step, first;
  bool empty() => first >= last;
  T front() => first;
  auto popFront() => first += step;
}

void main() {
  auto r1 = N!size_t(10, 1, 1);
  auto r2 = N!real(15, .5, 10);

  // r1.chain(r2).writeln;
  // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5]

  r1.chain(r2).map!(x => typeid(x)).writeln;
  // [real, real, . . . , real]
}
```
and it seems to work as I said.
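The element type that `chain` settles on can also be checked at compile time with `std.traits.CommonType` (a sketch of the type relations discussed here, not part of the original post):

```d
import std.traits : CommonType;

// the element type of chain(r1, r2) is the common type of the two element types
static assert(is(CommonType!(size_t, real) == real));   // the case above
static assert(is(CommonType!(int, double) == double));  // ints promote to double
static assert(is(CommonType!(char, int) == int));       // char promotes to int

void main() {}
```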


Re: Inconsistent chain (implicitly converts to int)

2024-04-05 Thread rkompass via Digitalmars-d-learn

On Friday, 5 April 2024 at 16:05:20 UTC, H. S. Teoh wrote:
On Fri, Apr 05, 2024 at 03:18:09PM +, Salih Dincer via 
Digitalmars-d-learn wrote:

Hi everyone,

Technically r1 and r2 are different types of range. Isn't it 
inconsistent to chain both? If not, why is the char type 
converted to int?

[...]

It's not inconsistent if there exists a common type that both 
range element types implicit convert to.




In the first example the ints are converted to doubles (their 
common type).
But they appear as ints because writeln does not write a 
trailing .0.
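The missing trailing `.0` can be seen directly (a tiny illustration, not from the original thread):

```d
import std.stdio;

void main()
{
    writeln(1.0);           // default %s formatting prints just "1"
    writefln("%.1f", 1.0);  // an explicit format reveals the double: "1.0"
}
```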


Re: Two chunks but No allocation

2024-03-28 Thread rkompass via Digitalmars-d-learn

On Thursday, 28 March 2024 at 03:54:05 UTC, Salih Dincer wrote:

On Wednesday, 27 March 2024 at 20:50:05 UTC, rkompass wrote:

This works:


I decided to give the full code. Maybe then it will be better 
understood what I mean. I actually pointed out the indirect 
solution above but it's a bit ugly and I'm sure there must be a 
better way?



I didn't look exactly at your code but at the ranges problem.


Perhaps this is of help:

```d
import std.stdio;
import std.range;
import std.algorithm;

void main() {
  auto fib = (real a, real b) => recurrence!"a[n-1] + a[n-2]"(a, b);
  auto golden3 = fib(1, 1).chunks(2).map!(r => r.fold!((a, e) => a/e)).take(10);

  writeln(golden3);
}
```
I thought what you wanted (and what I found to be an interesting 
problem) was to convert the subranges delivered by `chunks(2)` to 
values that are still generated lazily, without saving them in an 
array (which would turn the lazy range into an allocated one), in 
keeping with the original range.


You can drop and take from the folded values range.

I got `[1, 0.67, 0.625, 0.619048, 0.618182, 0.618056, 
0.618037, 0.618034, 0.618034, 0.618034]` from the above code.


Re: Why is this code slow?

2024-03-28 Thread rkompass via Digitalmars-d-learn

On Thursday, 28 March 2024 at 14:07:43 UTC, Salih Dincer wrote:

On Thursday, 28 March 2024 at 11:50:38 UTC, rkompass wrote:


Turning back to this: Are there similarly simple libraries for 
C that allow for parallel computation?


You can achieve parallelism in C using libraries such as 
OpenMP, which provides a set of compiler directives and runtime 
library routines for parallel programming.


Here’s an example of how you might modify the code to use 
OpenMP for parallel processing:


```c
  // . . .

  #pragma omp parallel for reduction(+:result)
  for (int s = ITERS; s >= 0; s -= STEPS) {
    result += leibniz(s);
  }

  // . . .
```
To compile this code with OpenMP support, you would use a 
command like gcc -fopenmp your_program.c. This tells the GCC 
compiler to enable OpenMP directives. The #pragma omp parallel 
for directive tells the compiler to parallelize the loop, and 
the reduction clause is used to safely accumulate the result 
variable across multiple threads.


SDB@79


Nice, thank you.
It worked endlessly until I saw I had to correct the `for` to
  `for (int s = ITERS; s > ITERS-STEPS; s--)`
Now the result is:
```
3.1415926535897936
Execution time: 0.212483 (seconds).
```
This result is sooo similar!

I didn't know that OpenMP programming could be that easy.
Binary size is 16K, same order of magnitude, although somewhat 
less.

D advantage is gone here, I would say.



Re: Why is this code slow?

2024-03-28 Thread rkompass via Digitalmars-d-learn

On Thursday, 28 March 2024 at 01:09:34 UTC, Salih Dincer wrote:
Good thing you're digressing; I am 45 years old and I still 
cannot say that I am finished as a student! For me this is 
version 4 and it looks like we don't need a 3rd variable other 
than the function parameter and return value:




So we go with another digression. I discovered `parallel`, and 
also avoided the extra variable, as suggested by Salih:

```d
import std.range;
import std.parallelism;
import core.stdc.stdio: printf;
import std.datetime.stopwatch;

enum ITERS = 1_000_000_000;
enum STEPS = 31; // 5 is fine, even numbers (e.g. 10) may give bad precision (for math reason ???)


pure double leibniz(int i) {  // sum up the small values first
    double r = (i == ITERS) ? 0.5 * ((i%2) ? -1.0 : 1.0) / (i * 2.0 + 1.0) : 0.0;

    for (--i; i >= 0; i -= STEPS)
        r += ((i%2) ? -1.0 : 1.0) / (i * 2.0 + 1.0);
    return r * 4.0;
}

void main() {
auto start = iota(ITERS, ITERS-STEPS, -1).array;
auto sw = StopWatch(AutoStart.yes);
double result = 0.0;
foreach(s; start.parallel)
result += leibniz(s);
double total_time = sw.peek.total!"nsecs";
printf("%.16f\n", result);
printf("Execution time: %f\n", total_time / 1e9);
}
```
gives:
```
3.1415926535897931
Execution time: 0.211667
```
My laptop has 6 cores and obviously 5 are used in parallel by 
this.
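One caveat (my addition, not from the thread): `result += leibniz(s)` inside `parallel` runs from several threads at once, so the accumulation is in principle a data race. `taskPool.reduce` performs the same summation with thread-safe combining of the partial sums; a sketch under that assumption:

```d
import std.algorithm : map;
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writefln;

enum ITERS = 1_000_000_000;
enum STEPS = 31;

pure double leibniz(int i) {  // sum up the small values first
    double r = (i == ITERS) ? 0.5 * ((i % 2) ? -1.0 : 1.0) / (i * 2.0 + 1.0) : 0.0;
    for (--i; i >= 0; i -= STEPS)
        r += ((i % 2) ? -1.0 : 1.0) / (i * 2.0 + 1.0);
    return r * 4.0;
}

void main() {
    // each worker folds part of the range; partial sums are combined safely
    auto result = taskPool.reduce!"a + b"(
        iota(ITERS, ITERS - STEPS, -1).map!(s => leibniz(s)));
    writefln("%.16f", result);
}
```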


The original question related to a comparison between C, D and 
Python.
Turning back to this: Are there similarly simple libraries for C 
that allow for parallel computation?



Re: Two chunks but No allocation

2024-03-27 Thread rkompass via Digitalmars-d-learn

On Wednesday, 27 March 2024 at 13:38:29 UTC, Salih Dincer wrote:


So, this does not work:

```d
fib(1, 1).take(48)
 //.array
 .chunks(2)
 .map!"a[1] / a[0]"
 .back
 .writeln; // 1.61803
```

Thanks...

SDB@79


This works:

```d
import std.stdio;
import std.range;
import std.algorithm;

void main() {
  auto fib = (real a, real b) => recurrence!"a[n-1] + a[n-2]"(a, 
b);

  auto golden = fib(1, 1).drop(46).take(2).fold!((a, e) => a/e);
  writefln("%.20f", golden);
  writeln("0.61803398874989484820");
}
```


Re: Why is this code slow?

2024-03-27 Thread rkompass via Digitalmars-d-learn
I apologize for digressing a little bit further - just to share 
insights with other learners.


I had the question why my binary was so big (> 4M), and 
discovered the
`gdc -Wall -O2 -frelease -shared-libphobos` options (now > 200K).
Then I tried to avoid the GC and just learnt this: the GC in the 
Leibniz code is there only for the writeln. With a change to 
(again standard C) printf the
`@nogc` modifier can be applied; the binary then gets down to 
~17K, a size comparable to the C counterpart.


Another observation regarding precision:
The iteration proceeds in the wrong order. Adding small 
contributions first and bigger ones last leads to less loss when 
summing up the small parts below the final real/double LSB limit.
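The effect of summation order is easy to demonstrate in isolation. A toy example with `float` (where the LSB limit is hit much sooner than with `double`); the numbers are mine, chosen only for illustration:

```d
import std.stdio;

void main()
{
    // small parts first, big value last: the tiny contributions survive
    float small = 0.0f;
    foreach (i; 0 .. 10_000)
        small += 0.1f;
    float a = 1e8f + small;

    // big value first: each 0.1 falls below the LSB of 1e8 and is lost
    float b = 1e8f;
    foreach (i; 0 .. 10_000)
        b += 0.1f;

    writefln("%f vs %f", a, b);  // a includes roughly 1000 more, b stays at 1e8
}
```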


So I'm now at this code (dropping the average over 20 iterations 
as unnecessary):


```d
// import std.stdio;  // writeln would cause the garbage collector to be included

import core.stdc.stdio: printf;
import std.datetime.stopwatch;

const int ITERATIONS = 1_000_000_000;

@nogc pure double leibniz(int it) {  // sum up the small values first

  double n = 0.5*((it%2) ? -1.0 : 1.0) / (it * 2.0 + 1.0);
  for (int i = it-1; i >= 0; i--)
n += ((i%2) ? -1.0 : 1.0) / (i * 2.0 + 1.0);
  return n * 4.0;
}

@nogc void main() {
double result;
double total_time = 0;
auto sw = StopWatch(AutoStart.yes);
result = leibniz(ITERATIONS);
sw.stop();
total_time = sw.peek.total!"nsecs";
printf("%.16f\n", result);
printf("Execution time: %f\n", total_time / 1e9);
}
```
result:
```
3.1415926535897931
Execution time: 1.068111
```



Re: Why is this code slow?

2024-03-25 Thread rkompass via Digitalmars-d-learn

On Sunday, 24 March 2024 at 23:02:19 UTC, Sergey wrote:

On Sunday, 24 March 2024 at 22:16:06 UTC, rkompass wrote:
Are there some simple switches / settings to get a smaller 
binary?


1) If possible you can use "betterC" - to disable runtime
2) otherwise
```bash
--release --O3 --flto=full -fvisibility=hidden 
-defaultlib=phobos2-ldc-lto,druntime-ldc-lto -L=-dead_strip 
-L=-x -L=-S -L=-lz

```


Thank you. I succeeded with `gdc -Wall -O2 -frelease 
-shared-libphobos`


A little remark:
The approximation to pi converges slowly, and it oscillates up 
and down around the limit much more than the average of two 
consecutive steps deviates from it. So taking the average of 2 
steps gives many more precise digits. We can simulate this by 
doing a last step with half the size:


```d
double leibniz(int it) {
  double n = 1.0;
  for (int i = 1; i < it; i++)
n += ((i%2) ? -1.0 : 1.0) / (i * 2.0 + 1.0);
  n += 0.5*((it%2) ? -1.0 : 1.0) / (it * 2.0 + 1.0);
  return n * 4.0;
}
```
Of course you may also combine the up(+) and down(-) step to one:

1/i - 1/(i+2) = 2/(i*(i+2))

```d
double leibniz(int iter) {
  double n = 0.0;
  for (int i = 1; i < iter; i+=4)
n += 2.0 / (i * (i+2.0));
  return n * 4.0;
}
```
or even combine both approaches. But of course, mathematically 
much more is possible. This was not about approximating pi as 
fast as possible...


The above first approach still works with the original speed, 
only makes the result a little bit nicer.


Re: Why is this code slow?

2024-03-24 Thread rkompass via Digitalmars-d-learn
The term containing the `pow` invocation computes the 
alternating sequence -1, 1, -1, ..., which can be replaced by 
e.g.


```d
   immutable int[2] sign = [-1, 1];
   n += sign[i & 1] / (i * 2.0 - 1.0);
```

This saves the expensive call to the pow function.


I used the loop:
```d
for (int i = 1; i < iter; i++)
n += ((i%2) ? -1.0 : 1.0) / (i * 2.0 + 1.0);
```
in both C and D, with gcc and gdc and got average execution times:

--- C -
original: 0.009989   loop replacement: 0.003198   -O2: 0.001335

--- D -
original: 0.230346   loop replacement: 0.003083   -O2: 0.001309

almost no difference.

But the D binary is much larger on my Linux:
 4600920 bytes instead of 15504 bytes for the C version.

Are there some simple switches / settings to get a smaller binary?


Re: How to make a struct containing an associative array to deeply copy (for repeated usage in foreach) ?

2024-03-18 Thread rkompass via Digitalmars-d-learn

@bachmeier
You're not the first one. There's no technical reason for the 
restriction. It's simply a matter of being opposed by those who 
make these decisions on the basis that it's the wrong way to 
program or something like that. Here is a recent thread: 
https://forum.dlang.org/post/ikwphfwevgnsxmdfq...@forum.dlang.org


Thank you for this. Very interesting discussion. And apparently a 
deliberate restriction of flexibility in type conversion.


I will first try to understand better how templates work under 
the hood before joining this discussion.


Given that the types S and T in, say, `templfunc(S, T)(S arg1, T 
arg2) {}` each represent 2 different actual types in the program, 
does that mean that there are 4 versions of the `templfunc` 
function compiled in? (This was the C++ way, iirc.)
Or are the types S and T put on the stack like ordinary 
arguments, with the usage of arg1 and arg2 within the function 
enveloped in switches that query these types?
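As far as I know it is the former: D templates are monomorphized like C++ templates, i.e. each distinct combination of type arguments produces its own compiled instance, resolved entirely at compile time. A quick way to watch this happen (a sketch; the names are only illustrative):

```d
import std.stdio;

void templfunc(S, T)(S arg1, T arg2)
{
    // printed once per instantiation, at compile time
    pragma(msg, "instantiating for (", S.stringof, ", ", T.stringof, ")");
    writeln(arg1, " / ", arg2);
}

void main()
{
    templfunc(1, 2.5);    // instance for (int, double)
    templfunc("a", 'c');  // instance for (string, char)
    templfunc(3, 4.5);    // reuses the existing (int, double) instance
}
```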




Re: How to make a struct containing an associative array to deeply copy (for repeated usage in foreach) ?

2024-03-18 Thread rkompass via Digitalmars-d-learn
To solve the problem with the 1-variable and 2-variable versions 
of foreach I
tried opApply and found that the compiler prefers it over opSlice 
and opIndex() (the latter without argument).


My code:

```d
int opApply(int delegate(Variant) foreachblock) const {
int result = 0;
foreach(val; dct) {
result = foreachblock(val);
if (result)
break;
}
return result;
}
int opApply(int delegate(Variant, Variant) foreachblock) const {
int result = 0;
foreach(key, val; dct) {
result = foreachblock(key, val);
if (result)
break;
}
return result;
}
```
So I'm fine with this now.
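For reference, the compiler selects the `opApply` overload by the number of loop variables. A reduced, self-contained sketch with a plain `int[string]` instead of `Variant` (the struct and names are only illustrative):

```d
import std.stdio;

struct Pairs
{
    int[string] data;

    // matched by the one-variable form: foreach (v; p)
    int opApply(int delegate(int) dg) {
        foreach (v; data)
            if (auto r = dg(v)) return r;
        return 0;
    }
    // matched by the two-variable form: foreach (k, v; p)
    int opApply(int delegate(string, int) dg) {
        foreach (k, v; data)
            if (auto r = dg(k, v)) return r;
        return 0;
    }
}

void main()
{
    auto p = Pairs(["a": 1, "b": 2]);
    foreach (v; p) writeln(v);                // values only
    foreach (k, v; p) writeln(k, " -> ", v);  // keys and values
}
```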




Re: How to make a struct containing an associative array to deeply copy (for repeated usage in foreach) ?

2024-03-15 Thread rkompass via Digitalmars-d-learn

On Friday, 15 March 2024 at 17:15:56 UTC, monkyyy wrote:

On Friday, 15 March 2024 at 09:03:25 UTC, rkompass wrote:

@Monkyyy: I adopted your solution, it is perfect.

I only have one problem left:

The foreach loop with associative arrays has two cases:

`foreach(key, val; arr)` and `foreach(x; arr)`.
In the second case only the values are iterated.
With the present solution the iteration delivers (key, val) 
tuples.


That will not be fixed in d2 ranges and has no good solutions; 
and my influence over d3 seems to be none. You could ask around 
for the "opApply" solution but I don't know it well (and prefer 
ranges)


d2 ranges are based on a simplification of stl's ideas, and stl 
doesn't support array-like iteration well. I wish to change 
that and am working on a proof-of-concept algorithms lib... but 
well, this is unlikely to work.


For d3, if changing the range interface fails, expect to see 
style guides say "prefer explicit range starters". 
string.byUnicode and string.byAscii will probably be how they 
kill `autodecoding`, and your data structure having 2 range 
functions such as `byKey` and `byKeyValue` will look the same.



Should I do an improvement request somewhere?


I think it's been kind of piecemeal, and D1 1D (repetition 
intentional) opSlice is in limbo (it was deprecated, and then 
slightly undeprecated in some random chats; it's a mess)


for completeness I believe the current state of 1d op overloads 
are:


opIndex(int)
opIndex(key)
opSlice()
opSlice(int, int)
int opDollar()
dollar opDollar()
opSlice(int, dollar)
opBinaryRight("in", K)(key) (opIn was deprecated and shouldn't 
have been)


If you're confident in your writing ability I'd suggest a clean-
slate article based on this list and what the compiler actually 
does (maybe ask around for any I missed) rather than trying to 
untangle this mess


Or write a DIP thread ("undeprecate D1 opOverloads that are 
still wanted by everyone") and try to bring back opIn at the 
same time, and get the limbo status of the old, technically 
deprecated 1d array opOverloads officially gone


I'm quite new to D yet. But I have some acquaintance with Python.
Therefore, together with templates, the discovery of the Variant 
type inspired me to the following:
I wanted to explore if it's possible to do sort of type-agnostic 
programming with D. This could perhaps enable a simpler 
translation of Python code to D.


Trying with a `Variant[Variant] dct;` dictionary I observed that 
even simple assignment of key:value pairs was not possible as the 
different types are not automatically cast to a Variant.


Embedded in a struct with templating and casts to Variant such a 
dict now seems possible:


The preliminary code:

```d
// implement .get .update .require

import std.stdio;
import std.typecons;
import std.range;
import std.variant;
import std.string;
import std.format;

struct dict
{
Variant[Variant] dct;

Variant opIndex(T)(T key) {
return dct[cast(Variant) key];
}
void opIndexAssign(V, T)(V val, T key) {
dct[cast(Variant) key] = cast(Variant) val;
}
auto opBinaryRight(string op : "in", T)(T lhs) {
return cast(Variant)lhs in dct;
}
@property auto keys() {
return dct.keys;
}
@property auto values() {
return dct.values;
}
auto remove(T)(T key) {
return dct.remove(cast(Variant) key);
}
@property auto dup() {
dict newd;
foreach (k; dct.keys)  // do a deep copy
newd.dct[k] = dct[k];
return newd;
}
    void toString(scope void delegate(const(char)[]) sink, FormatSpec!char fmt) {

put(sink, "dict([");
bool rep = false;
foreach (k; dct.keys) {
if (rep)
put(sink, ", ");
formatValue(sink, k, fmt);
put(sink, ":");
formatValue(sink, dct[k], fmt);
rep = true;
}
put(sink, "])");
}
auto opSlice(){
struct range{
Variant[Variant]* parent;
int i;
            auto front() => tuple(parent.keys[i], (*parent)[parent.keys[i]]);
auto popFront()=>i++;
auto empty()=>parent.keys.length<=i;
}
return range(&this.dct);
}
}

void main() {

dict d;

writeln("d: ", d);// ==> dict([])
writeln("d.keys: ", d.keys);
writeln("d.values: ", d.values);
writeln("d.keys.length: ", d.keys.length);
writeln("");

writeln("populating dict ");
d["hello"] = 2;
d[3.1] = 5;
d['c'] = 3.14;
d[2] = "freak";
d["mykey"] =
```

Re: How to make a struct containing an associative array to deeply copy (for repeated usage in foreach) ?

2024-03-15 Thread rkompass via Digitalmars-d-learn

@Monkyyy: I adopted your solution, it is perfect.

I only have one problem left:

The foreach loop with associative arrays has two cases:

`foreach(key, val; arr)` and `foreach(x; arr)`.
In the second case only the values are iterated.
With the present solution the iteration delivers (key, val) 
tuples.


Can this somehow be detected by the opSlice or is there another 
overloading

construct to be applied for this?


Addition: I noted that in the best matching 
[docs](https://dlang.org/spec/operatoroverloading.html#slice) 
only *ordinary arrays* are covered. Your solution would make a 
very nice addition for the case of associative arrays there. I 
learnt a lot from it.

Should I do an improvement request somewhere?





Re: How to make a struct containing an associative array to deeply copy (for repeated usage in foreach) ?

2024-03-14 Thread rkompass via Digitalmars-d-learn

Hello @monkyyy,

thank you for your help. I will study and try your code.

Meanwhile I have found that I can add this function into the 
struct:


```d
// postblit constructor, see
// 
https://stackoverflow.com/questions/38785624/d-struct-copy-constructor

this(this) {
string[string] ndct;
foreach (k; dct.keys)  // do a deep copy
ndct[k] = dct[k];
dct = ndct;
}
```
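The reason this helps with foreach (my summary, hedged): foreach iterates over a copy of the range struct, and the postblit runs on exactly that copy, so the deep-copied associative array is consumed instead of the original. A self-contained sketch:

```d
import std.stdio;

struct mydict
{
    string[string] dct;

    this(this)  // postblit: runs after every copy of the struct
    {
        string[string] ndct;
        foreach (k, v; dct)  // deep-copy the associative array
            ndct[k] = v;
        dct = ndct;
    }
}

void main()
{
    mydict a;
    a.dct = ["x": "1"];
    auto b = a;             // postblit fires here
    b.dct["x"] = "changed";
    writeln(a.dct["x"]);    // still "1": the copy no longer aliases the original
}
```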



How to make a struct containing an associative array to deeply copy (for repeated usage in foreach) ?

2024-03-13 Thread rkompass via Digitalmars-d-learn
I want to make a custom dictionary that I may iterate through 
with foreach. Several times.
What I observe so far is that my dict as a simple forward range 
is exhausted after the first foreach and I have to deeply copy it 
beforehand.

With a simple associative array the exhaustion is not observed.
Is there a (hopefully simple) way to make this 
automatic/transparent? Of course I need to use the struct.
Can I add a save member function? If yes: How? Or is there an 
operator that is used in the foreach initialization that I may 
overload in this struct?


My code:
```d
import std.stdio;
import std.string;
import std.typecons;

struct mydict {
   string[string] dct;

   @property bool empty() const {
  return dct.empty;
   }
   @property auto front() {
  return tuple(dct.keys[0], dct[dct.keys[0]]);
   }
void popFront() {
dct.remove(dct.keys[0]);
}
void opAssign(mydict rhs) {
writeln("--opAssign--");
foreach (k; rhs.dct.keys)  // do a deep copy
dct[k] = rhs.dct[k];
}
}

void main() {

mydict md, md2;
md.dct = ["h":"no", "d":"ex", "r": "cow"];
md2 = md; // md2.opAssign(md)
foreach (k, v; md)
writeln("key: ", k, "val: ", v);
writeln("--");
    foreach (k, v; md)   // does not work with md again, md is exhausted
        writeln("key: ", k, "val: ", v);
}
```