Re: Exercise at end of Ch. 56 of "Programming in D"

2022-08-20 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 15 August 2022 at 03:19:43 UTC, johntp wrote:
Your solution worked. I guess it is a little unnatural to 
ignore the color.  I tried overriding the toHash() of Point, 
but I don't know enough D to get it to work.  I wonder if that 
could be a solution.


 Depends on what you're trying to do. Metadata unrelated to the value of the object I would ignore and leave out of hashing and comparisons. I've also done this for strings that held extra information, like the data's original position in the array (*for visual sorting tests*), which could yield information while not interfering with the object/data in question.


 Though x+y as a hash seems terrible. I'd probably do ((x + 1000) ^^ 2) + y (assuming x and y are both going to be generally small), ensuring the hash for a location is unique. Then Point(2,1) and Point(1,2) have different hashes. But I'm not familiar with the exercise in question.
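
 A minimal sketch of overriding toHash (assuming a class Point with an extra color member, which is how I read the exercise; the hashing scheme is the one suggested above):

```d
class Point
{
    int x, y;
    string color; // metadata: ignored for hashing and equality

    this(int x, int y, string color = "")
    {
        this.x = x; this.y = y; this.color = color;
    }

    override size_t toHash() @safe nothrow
    {
        // the scheme suggested above: unique for small coordinates
        immutable k = x + 1000;
        return cast(size_t)(k * k + y);
    }

    override bool opEquals(Object o)
    {
        auto rhs = cast(Point) o;
        return rhs !is null && x == rhs.x && y == rhs.y;
    }
}
```

With both overrides in place, an AA treats Point(1,2,"red") and Point(1,2,"blue") as the same key.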


Re: Is there any implementation of a 128bit integer?

2022-07-10 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 8 July 2022 at 15:32:44 UTC, Rob T wrote:

https://forum.dlang.org/post/mailman.10914.1566237225.29801.digitalmars-d-le...@puremagic.com

In case someone comes across this old thread

https://dlang.org/phobos/core_int128.html


There was a discussion on this not long ago. Walter tried 
implementing it recently too, though I'm guessing he gave up.


https://forum.dlang.org/thread/wuiurmxvqjcuybfip...@forum.dlang.org

There are multiple libraries, one of which I wrote, that try to address this issue.


 One thing you can try is using BigInt, and then reducing to 128 bits if/when you need to store the result. Apparently a number of compilers and back-ends already know how to handle 128-bit types (*and maybe larger*); it's just a matter of putting it in the D front end so it generates the appropriate calls.
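
 A rough sketch of that reduce-for-storage idea (assuming a reasonably recent Phobos where BigInt supports explicit casts to integral types):

```d
import std.bigint;
import std.stdio;

void main()
{
    // do the math in BigInt, then reduce to two 64-bit words for storage
    BigInt v = BigInt("123456789012345678901234567890") * 99;
    BigInt base = BigInt(1) << 64;

    ulong lo = cast(ulong)(v % base); // low 64 bits
    ulong hi = cast(ulong)(v / base); // high 64 bits (assumes v < 2^128)

    writefln("hi = %016x, lo = %016x", hi, lo);
}
```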


https://github.com/d-gamedev-team/gfm/blob/master/integers/gfm/integers/wideint.d

https://github.com/rtcvb32/Side-Projects/tree/master/arbitraryint


Re: Infinite fibonacci sequence, lazy take first 42 values

2022-04-21 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 21 April 2022 at 04:36:13 UTC, Salih Dincer wrote:
My favorite is the struct range.  Because it is more 
understandable and personalized.  Moreover, you can limit it 
without using ```take()```.


 And it's inherently lazy, so no extra processing/calculation 
other than what's requested.
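
 For reference, a minimal sketch of such a struct range (my own example, not Salih's code):

```d
import std.range : take;
import std.stdio : writeln;

// an infinite Fibonacci range as a struct: lazy by construction
struct Fib
{
    ulong a = 0, b = 1;
    enum empty = false;                // never runs out
    ulong front() const { return b; }
    void popFront() { immutable t = a + b; a = b; b = t; }
}

void main()
{
    Fib().take(10).writeln(); // [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
}
```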


Re: Beginner memory question.

2022-04-19 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 16 April 2022 at 20:48:15 UTC, Adam Ruppe wrote:

On Saturday, 16 April 2022 at 20:41:25 UTC, WhatMeWorry wrote:

Is virtual memory entering into the equation?


Probably. Memory allocated doesn't physically exist until 
written to a lot of the time.


This might be very much an OS implementation issue.

 In Linux, using zram I've allocated and made a compressed drive of 8GB which took only 200k of space (*the data I needed to extract compresses very well and would only be used temporarily*); claiming that much space, even though I have only 4GB of RAM, didn't seem to matter. All unallocated pages are assumed null/zero-filled, and if you zeroize a block it will unallocate the space. That makes extracting memory-bomb archives (*terabytes of zeroized files meant to fill space*) rather safe in that environment.


 I would think if it's a small space (*say 32MB or under, or some percentage like less than 1% of available memory*) it would allocate the memory and immediately return it. If it's larger, it may say it allocated a range of memory (*as long as RAM+VM could hold it*) and then allocate as needed. The CPU issues page faults when you try to access unallocated memory, or memory that's not paged in at the time, and passes them to a handler; the handler would then allocate the page(*s*) and resume as though the memory was always allocated (*alternatively: suspend until there's free RAM, save the program to disk for later resuming if there are no open ports/writable files, or just crash the program with a segmentation fault*). It will make some things faster, and other things slower.


 If it tries to allocate all the memory at once, it may fill up RAM, then swap pages out, then fill RAM up again until the requested allocation succeeds, which could be wasteful and slow. Or maybe it will allocate/reserve the necessary swap space and then allocate as much memory as it can before returning to the process.


 When you run out of RAM and there's tons of swapping, a fast computer can turn into a brick for several minutes on the simplest of commands, at which point changing swap settings can improve things.





Re: Can Enums be integral types?

2022-04-19 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 19 April 2022 at 13:20:21 UTC, Bastiaan Veelo wrote:

There is nothing that requires enum values to be unique, though:
```d
import std;
void main()
{
enum E {Zero = 0, One = 0, Two = 0}
writeln(E.Two); // Zero!
}
```


 True, but if you want it to be useful they really need to be unique.


Re: Can Enums be integral types?

2022-04-18 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 17 April 2022 at 18:25:32 UTC, Bastiaan Veelo wrote:

On Saturday, 16 April 2022 at 11:39:01 UTC, Manfred Nowak wrote:
In the specs(17) about enums the word "integral" has no match. 
But because the default basetype is `int`, which is an 
integral type, enums might be integral types whenever their 
basetype is an integral type.


The reason is in [17.1.5](https://dlang.org/spec/enum.html):  
“EnumBaseType types cannot be implicitly cast to an enum type.”


The 'integral' or numeric value is used for uniqueness, not for math or some other effect, any more than a primary int key in an SQL database is used to identify someone's birthday. (*Maybe that's the wrong analogy; comparing apples to oranges, perhaps.*)


 We will indeed have to explicitly cast to get around it, though it doesn't mean much. If you have, say, true=1 and blue=2, what is blue+true? Numerically it's 3, but there's no value 3; or value 3 could be, say, potato...


 A few years ago I made an enum flag-storage library, which would take an int and convert it to N flags, or N flags back to an int, for compactly storing said values. It's been quite a while, though I do recall a lot of casting and binary AND/OR/XORs were involved for it to work the way it was intended.
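
 A small sketch of the kind of casting that's involved (Flag and its members are made up for illustration):

```d
import std.stdio;

enum Flag : int { None = 0, Read = 1, Write = 2, Exec = 4 }

void main()
{
    // binary operators convert the enum to its base type, so an
    // explicit cast is needed to get back to the enum type
    Flag combined = cast(Flag)(Flag.Read | Flag.Write);

    int packed = combined;                      // enum -> int is implicit
    bool canWrite = (packed & Flag.Write) != 0; // unpacking is plain bit tests
    writeln(combined, " canWrite=", canWrite);
}
```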


Re: How to implement this?

2022-04-07 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 8 April 2022 at 04:54:35 UTC, Era Scarecrow wrote:

Maybe it should be `cast(A*) &b.a`?


The confusing HTML entities got into the quote here. Probably just ignore them.


Maybe you are doing it backwards.

What if you had

```d
struct B {
    A* a;
}

A[] arraylist;
```

then in the init, append a new item to the array list before doing:

```d
arraylist ~= A();
b.a = &arraylist[$ - 1];
```

Alternately, add a static list and have the batch function access 
both lists?


```d
struct B {
    static A[] list;
    int a_index = -1;
}

b.a_index = cast(int) B.list.length; // list.length is size_t, so cast down
B.list ~= A();
```

then reference the item by list[a_index] or something similar.


Re: How to implement this?

2022-04-07 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 8 April 2022 at 04:31:45 UTC, Elvis Zhou wrote:

B b;
init(&b);
structs ~= cast(A*)&b;
//Error: copying `cast(A*)&b` into allocated memory escapes a reference to local variable `b`


Maybe it should be `cast(A*) &b.a`?




Re: Basic question about size_t and ulong

2022-03-22 Thread Era Scarecrow via Digitalmars-d-learn

On Wednesday, 23 March 2022 at 00:51:42 UTC, Era Scarecrow wrote:

On Tuesday, 22 March 2022 at 21:23:43 UTC, H. S. Teoh wrote:

We already have this:

```d
import std.conv : to;
int x;
long y;
y = x.to!long;  // equivalent to straight assignment / cast
x = y.to!int;   // throws if out of range for int
```


 This particular usage can be useful, just not in the 
*automatic* sense i was meaning.


 Forgot to add this: for the more automatic mode, maybe add a new attribute, say @autodowncast, which would insert the .to!PassingType check for you, keeping the runtime checks without needing to throw casts in a dozen places.


 Though i doubt Walter or Andrei would go for it.


Re: Basic question about size_t and ulong

2022-03-22 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 22 March 2022 at 21:23:43 UTC, H. S. Teoh wrote:

On Tue, Mar 22, 2022 at 09:11 PM, Era Scarecrow wrote:

[...]
I'd almost wish D had a more lenient mode and would do 
automatic down-casting, then complain if it *would* have 
failed to downcast data at runtime.

[...]


We already have this:

```d
import std.conv : to;
int x;
long y;
y = x.to!long;  // equivalent to straight assignment / cast
x = y.to!int;   // throws if out of range for int
```


 At which point I might as well just do cast(int) on everything regardless, **BECAUSE** the point of it is **NOT** having to add a bunch of conversions or extra bits to it.


 This particular usage can be useful, just not in the *automatic* 
sense i was meaning.


Re: Basic question about size_t and ulong

2022-03-22 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 22 March 2022 at 18:47:19 UTC, Ali Çehreli wrote:

On 3/22/22 11:28, Era Scarecrow wrote:
>   So when should you use size_t?

I use size_t for anything that is related to count, index, etc. 
However, this is a contested topic because size_t is unsigned.


 I don't see a problem with that. It's not like you can access a -300 address or index (*although making your own index function, technically you could*). I'm actually surprised signed is the default rather than unsigned. Were negative numbers really that important in 16-bit MS-DOS that C had to have signed as the default?


 This question is probably going off topic, but it would still be interesting to know if there's an answer.



> Is it better to use int, long, size_t?

D uses size_t for automatic indexes during foreach, and as I 
said, it makes sense to me.


Otherwise, I think the go-to type should be int for small 
values. long, if we know it won't fit in an int.


 Mhmm. More or less this is what I would think. I'm just getting sick of either returning numbers that I then have to feed into indexes, where it complains they're too big, or putting what is going to be a smaller number into an array whose element type is too small. Casting or using masks may resolve the issue, but it may crop up again when I make a change or try to compile on a different architecture.


 At the moment I usually do my work on a 64-bit laptop, but sometimes I run the 32-bit dmd version on Windows on a different computer, checking for differences between ldc/gdc and dmd to see if the code complains. I see more and more why different versions of compilers/OSes are a pain in the ass.


 I'd almost wish D had a more lenient mode and would do automatic 
down-casting, then complain if it *would* have failed to downcast 
data at runtime.


> Or is it better to try to use the smallest type you need that 
> will fulfill the function's needs and just add to handle 
> issues due to downcasting?


That may be annoying, misleading, or error-prone because 
smaller types are converted at least to int in expressions 
anyway:


 Yeah, and I remember reading about optimization in GCC where using smaller types can actually be slower, much like how on some architectures non-aligned data at offset memory addresses incurs a speed penalty.


But yeah, if your function works on a byte, sure, it should 
take a byte.


Expect wild disagreements on this whole topic. :)


Though internally it may be an int...



Re: Basic question about size_t and ulong

2022-03-22 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 18 March 2022 at 23:01:05 UTC, Ali Çehreli wrote:
P.S. On a related note, I used to make the mistake of using 
size_t for file offsets as well. That is a mistake because even 
on a 32-bit system (or build), file sizes can be larger than 
uint.max. So, the correct type is long for seek() so that we 
can seek() to an earlier place and ulong for tell().


 Perhaps we should back up and ask a different question. I've been working on an adaptation of Reed-Solomon codes, and I keep getting hit with casting errors, to the point where I just want to make everything size_t to make the errors go away.


 So when should you use size_t? Is it better to use int, long, or size_t? Or is it better to use the smallest type that will fulfill the function's needs, and just add code to handle issues due to downcasting?





Re: Nested Classes with inheritance

2022-03-20 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 20 March 2022 at 05:44:44 UTC, Salih Dincer wrote:

On Sunday, 20 March 2022 at 01:28:44 UTC, Era Scarecrow wrote:
Inheritance and Polymorphism is one of the hardest things to 
grasp mostly because examples they give in other books of 
'objects' is so far unrelated to software that it doesn't 
really compare.


You are right, difficult model yet so abeyant. Moreover, there 
is a lot of freedom given in D. I think OOP alone is not worth 
5 cents without design patterns.


OOP can be aggressive like a dog. I think D should be a little 
more rigid on OOP.


 Actually one good example for objects is a game, specifically Magic: The Gathering.


 MTG is a card game, as everyone knows. But there are so many effects: when a card comes into play, when it goes to the graveyard, if it gets exiled, the cost to cast it from the graveyard/exile, tapping, untapping, paying and tapping to do an ability, milling cards, drawing cards, scrying cards, attacking without tapping, deathtouch, islandwalk. Then there are enchantments, equipment, passive always-active abilities (*all your creatures get +0/+1*) and a myriad of other things.


 Now with that out of the way, making a base 'card' with all its actions as a mere interface, and then making each card individually OR inheriting from a base card using polymorphism, would be a great way to do things.


 Going the one route

```d
abstract class Card {
    string name, description, picture, quote;
    string types; // Green Insect Token, Indestructible, etc.
    int basePower, baseToughness;
    Card[] enchantments;              // affecting this card specifically
    static Card[] globalEnchantments, // affecting all cards
                  playerEnchantments; // affecting only my own cards

    // When a card leaves play, remove it from the enchantments list.
    // Globals would just be removed from the static list.
    void purgeCard(Card target) {
        import std.algorithm.mutation : remove;
        enchantments = enchantments.remove!(c => c is target);
    }

    // calculate power: own base power plus all applicable enchantments
    int power() {
        int t = basePower;
        foreach(x; enchantments) { t += x.basePower; }
        foreach(x; globalEnchantments) { t += x.basePower; }
        foreach(x; playerEnchantments) { t += x.basePower; }
        return t;
    }

    // same idea for toughness
    int toughness() {
        int t = baseToughness;
        foreach(x; enchantments) { t += x.baseToughness; }
        foreach(x; globalEnchantments) { t += x.baseToughness; }
        foreach(x; playerEnchantments) { t += x.baseToughness; }
        return t;
    }
    // etc. for a base class with expected functional hooks for combat, instants, etc.
}

class Wurm : Card {
    this() {
        name = "Greater Wurm";
        quote = "When the Wurm comes, all flee its destruction";
        basePower = 3;
        baseToughness = 3;
    }

    // no tap or other abilities, left blank
}

class Beetle : Card {
    Card original;

    this(Card target) {
        description = "The target card becomes a 0/1 indestructible beetle";
        types = "Green,Insect,Indestructible,Enchantment";
        // code to swap original and target
    }

    ~this() { /* unswap */ }

    override int power() { return 0; }
    override int toughness() { return 1; }
}

class Sword : Card {
    this() {
        name = "Soldier's sword";
        description = "Enchanted creature gets +2/0";
        types = "Enchantment,Equipment,Colorless";
        basePower = 2;
    }

    this(Card target) {
        target.enchantments ~= this;
    }
}
```

Here you have a base card to work with, a monster, an enchantment, and an override.


Re: Nested Classes with inheritance

2022-03-19 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 19 March 2022 at 12:23:02 UTC, user1234 wrote:
I think OP is learning OOP. His error looks like that for the 
least.


 True. Looking at the code, it shouldn't spaghetti in on itself infinitely, and it's basically clean in its intent.


Inheritance and polymorphism are some of the hardest things to grasp, mostly because the examples of 'objects' given in other books are so far removed from software that they don't really compare. `"An object is like a book which you can read and turn the page..."` but which you can't tear, or burn, or hand to a friend, or put on the shelf upside down, or put your coffee on top of while you surf the web, leaving a ring on the book.


 Or comparing inheritance and polymorphism to animals, where beyond overriding the output function to 'meow' or something it doesn't really help, while comparing to, say, bank account management would be much better.


Maybe I'm just venting on the C++ Primer from 1997 that just 
annoyed me to hell.





Re: Nested Classes with inheritance

2022-03-18 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 19 March 2022 at 00:16:48 UTC, user1234 wrote:
That crashes because of the creation of `Bar b` member, which 
itself has a Bar b member, which itself...


 Mhmm... So there's Foo with a Bar b, which has a Bar b, which has a Bar b, which... just keeps going over and over again.


 It appears to me that it only crashes once you fully run out of memory then, much like a function calling itself until you exhaust all your stack space.


 I'd suggest avoiding having a class construct an instance of itself as a member. No one wants to be their own Grandpa.


Re: How to exclude function from being imported in D language?

2022-03-17 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 8 March 2022 at 22:28:27 UTC, bauss wrote:
What D just needs is a way to specify the entry point, in which 
it just defaults to the first main function found, but could be 
any function given.


 Which is similar to what Java does.

 When I was first learning Java at a company, I would make a main() that ran all the unittests of that particular module, then have a different file that actually combined all the tools together to run the program. When making the jar, I'd specify which one was actually the entry point. But this was... 10 years ago.







Re: static init c struct with array filed

2022-03-16 Thread Era Scarecrow via Digitalmars-d-learn

On Wednesday, 16 March 2022 at 11:27:20 UTC, user1234 wrote:

assuming the c library takes by reference


 In my experience, C arrays are effectively just pointers, and the brackets/length are only really applicable to stack-allocated fixed-length allocations. In my own C projects I always used pointers to pass arrays around.


 There's also static construction, though I don't see how that improves anything:


https://dlang.org/spec/module.html#staticorder



Re: Strange behavior of iota

2022-02-15 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 15 February 2022 at 22:24:53 UTC, bachmeier wrote:
On Tuesday, 15 February 2022 at 22:02:13 UTC, Adam D Ruppe 
wrote:

for(a = v.length; a > cast(size_t) -1; a += -1)


After looking at the documentation and seeing CommonType!(int, 
uint) is uint, I have to say that iota's behavior doesn't make 
much sense.


Unless it's almost always intended to go up and stay positive?

Not that it can't be modified to take all those cases in; once you hit 64-bit there's little reason you can't just use (long, long) for the arguments.


Re: How to verify DMD download with GPG?

2022-02-14 Thread Era Scarecrow via Digitalmars-d-learn
On Tuesday, 8 February 2022 at 10:17:19 UTC, Ola Fosheim Grøstad 
wrote:
I do like the idea that a hacker cannot change the signature 
file if gaining access to the web/file hosts, but how to verify 
it in secure way?


 For Linux sources there are MD5 and SHA-1 hashes, I believe. If you have two or three hashes for comparison, the likelihood of someone changing something without those hashes changing seems VERY low.


Re: how to handle very large array?

2022-02-12 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 10 February 2022 at 01:43:54 UTC, H. S. Teoh wrote:
On Thu, Feb 10, 2022 at 01:32:00AM +, MichaelBi via 
Digitalmars-d-learn wrote:

thanks, very helpful! i am using a assocArray now...


Are you sure that's what you need?


 Depends. If you do, say, TYPE[long/int] then you effectively have a sparse array; if there are relatively few entries (*say a hundred million or something*) it will probably be fine.


 Depending on what you're storing, say if it's a few bits per entry, you can probably use a BitArray to store differing values. The 10^12 would take up 119GB? That won't work. I wonder how the 25GB was calculated.
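
 A quick sketch of both ideas, a sparse AA and bit-packed dense storage (the numbers are just placeholders):

```d
import std.bitmanip : BitArray;

void main()
{
    // sparse storage: an AA keyed by a wide index
    ubyte[ulong] sparse;
    sparse[987_654_321_000] = 3;

    // dense flag storage: one bit per entry, 100 million bits is ~12MB
    BitArray flags;
    flags.length = 100_000_000;
    flags[12_345_678] = true;
}
```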


 Though data of that size sounds more like a database. So maybe making an index-access wrapper that does file access to store to media would be better, swapping a single 1MB block (*or some power-of-2 size*) in and out for reads/writes. If the sparseness/density isn't heavy, you could instead get away with using a zram drive and leaving the allocation/compression to the OS so it all remains in memory (*though that's not a universal workaround and only works on Linux*), or doing compression with zlib on blocks of data and having the array-like object swap them in/out, but I'm more iffy on that.


 If the array is just to hold, say, the results of a formula, you could instead make a range with index support that generates the particular value on demand; that uses very little space (*depending on the minimum size of data needed to generate it*), though it may be slower than direct memory access.




Re: How to work with hashmap from memutils properly?

2022-02-11 Thread Era Scarecrow via Digitalmars-d-learn
On Friday, 11 February 2022 at 02:43:24 UTC, Siarhei Siamashka 
wrote:
Though this strange benchmark is testing performance of an LRU 
with ... wait for it ... 10 elements, which makes using 
hashmap/dict/AA a completely ridiculous idea.


 Hmmm... if it's static data, I can see maybe an enum hashmap with key names, resolved at compile time to fixed values (*for the AA*).


 I remember for a C project I faked a hashmap/AA by having sorted key/value pairs and then doing a binary lookup. I also have a D static-AA I made which builds an array large enough for all the statically known values at compile time, though I don't know if anyone uses it.
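
 The sorted-pairs trick looks roughly like this in D (a sketch; Pair and the data are made up):

```d
import std.algorithm.sorting : sort;
import std.range : assumeSorted;
import std.stdio : writeln;

struct Pair { string key; int value; }

void main()
{
    // fake an AA: keep key/value pairs sorted and binary-search them
    auto table = [Pair("one", 1), Pair("three", 3), Pair("two", 2)];
    table.sort!((a, b) => a.key < b.key);

    auto sorted = table.assumeSorted!((a, b) => a.key < b.key);
    auto hit = sorted.equalRange(Pair("two", 0)); // value is ignored by the predicate
    writeln(hit.empty ? -1 : hit.front.value);    // 2
}
```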


 Regardless, 10 items really is a bad test size: big enough to show it might be working, but not big enough for performance tests (*at least with 2GHz+ computers today; maybe on a Pi where you can drop the clock to 30MHz you could get somewhat useful results from a smaller dataset*).





Re: number ranges

2022-01-18 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 17 January 2022 at 22:28:10 UTC, H. S. Teoh wrote:
This will immediately make whoever reads the code (i.e., myself 
after 2 months :D) wonder, "why +1?" And the answer will become 
clear and enlightenment ensues. ;-)


 In those cases I find myself rewriting said code, generally to say **for(int i=1; i<=5; i++)** or something, where it includes the last value without adding oddities like unexplained magic numbers or an odd +1.


 Then again, the big issue *probably* comes from people coming from BASIC of some description, with its **FOR A=1 TO 5**, where the index starts at 1 and includes the number listed, and you aren't given other conditions to test against. It really does take a little getting used to.


 Maybe we don't use QBasic or 8-bit MS BASIC much anymore, but Visual Basic and legacy code grandfather those habits in, and maybe a few other interpreted languages too.


Re: Throw stack trace from program kill

2022-01-16 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 16 January 2022 at 18:03:53 UTC, Paul Backus wrote:
On POSIX, you can use the `sigaction` function to install a 
signal handler for `SIGINT`, the signal generated by CTRL+C. To 
terminate the program with a stack trace, simply have the 
signal handler `throw` an `Error`.


 I never quite got deep enough to start using these, though I can tell a lot of programs take advantage of them in different ways. For example, optipng or jpegoptim likely have a handler, and if killed they do cleanup and then quit.


 **So**, normally said image optimizers create a new file as **somefile.jpg.tmp12345**; if uninterrupted, **somefile.jpg** is deleted and **somefile.jpg.tmp12345** is renamed to the original file name. Interrupted execution, on the other hand, would close the temp file and then delete it before returning control, leaving the original file untouched.



 As for how to handle things beyond cleanup, I'm not quite so sure. I don't see why you couldn't do a stack trace, or core-dump a file with the current state that you could then look at (*and maybe attach a debugger*).
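
 A minimal POSIX-only sketch of Paul's suggestion (the cast papers over attribute differences in the handler field's declared type; throwing from a signal handler is platform-dependent, so treat this as illustrative):

```d
import core.stdc.signal : SIGINT;
import core.sys.posix.signal : sigaction, sigaction_t;

extern (C) void onInterrupt(int)
{
    // Error (unlike Exception) may escape even nothrow contexts,
    // so this terminates the program with a stack trace
    throw new Error("Interrupted (CTRL+C)");
}

void main()
{
    sigaction_t act;
    act.sa_handler = cast(typeof(act.sa_handler)) &onInterrupt;
    sigaction(SIGINT, &act, null);

    while (true) {} // spin until CTRL+C
}
```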


Re: Dynamic array or not

2022-01-16 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 16 January 2022 at 15:32:41 UTC, Salih Dincer wrote:
If count is not equal to 8 I get weird results! The reason of 
course, is the free():

// [93947717336544, 1, 2, 3, 4, 5, 6]


I wonder if you're seeing something you're not supposed to, as part of the internals. Typically, 16-bit malloc to my understanding worked a little differently: you had 2 bytes BEFORE the memory block which specified a size, and if it was negative it was the size of free memory (*this allowed a 64k block to have multiple allocations and de-allocations*). So you might have: **64, [64 bytes of data], -1024, [1024 bytes of unallocated memory], 32, [32 bytes of allocated data, etc.]**. In the above, if you deallocated the 64-byte block, its header would just become -1090 (*merging the next free block: 1024+64+2*). This would also explain why double-freeing would break things, since the header would now refer to a free-block size and throw a fit, or it was no longer a length malloc/free could use.


Also, if malloc returns anything it guarantees at least that size; you might ask for 7 bytes but get 16, and the rest is just ignored.


Is this how 32/64-bit malloc/free work? Not sure; it wouldn't be unreal for Java and others to pre-allocate say a megabyte and then quickly give out/manage smaller blocks, extending or getting another block later.


Though I'm sure others here have given better/more exact on 
internals or how allocation works in D.


Re: mixin does not work as expected

2022-01-05 Thread Era Scarecrow via Digitalmars-d-learn

On Wednesday, 5 January 2022 at 08:40:15 UTC, rempas wrote:
I'm trying to use mixins and enums to "expand" code in place 
but the results are not what I expected and I'm getting an 
weird error. I have created the smallest possible example to 
reproduce the error and it is the following:


 Back when I was working on bitmanip with the bitfields mixin, I had to rewrite it with newlines and tabs, then expand it as text and output it to see what the actual generated code was, before I could debug it.

 The output was... informative.
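
 That expand-it-as-text trick is easy to do at compile time (the field names here are just examples):

```d
import std.bitmanip : bitfields;

// instead of mixing the generated code straight in,
// capture it as a string and dump it for inspection
enum code = bitfields!(
    uint, "x",  2,
    int,  "y",  3,
    uint, "z", 11);

pragma(msg, code); // prints the generated declarations at compile time
```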

 That said, rolling your own mixins should really be a last resort. You're dumping a lot into a single line of code you can't trace, follow, debug, or look at.


Re: print ubyte[] as (ascii) string

2021-12-30 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 30 December 2021 at 09:34:27 UTC, eugene wrote:
The buffer contains (ascii) string terminated with '\n'. In 
order to print it not as an array of numbers (buf is 1024 bytes 
long), but as usual string I do


 A few years ago I asked a similar question, not for UTF-8 but for ASCII. I was working on a tool for Morrowind, after determining its strings were not variable-width but, much like ASCII, fixed at 256 possible characters.


 The answer I ended up with was a quick conversion to UTF in order to print it. Seems you might have to convert to Latin-1.
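
 Since Latin-1 code points map 1:1 onto the first 256 Unicode code points, widening each byte and re-encoding is enough (a sketch with made-up data):

```d
import std.algorithm : map;
import std.array : array;
import std.conv : to;
import std.stdio : write;

void main()
{
    ubyte[] buf = [72, 101, 108, 108, 111, 33, 10]; // "Hello!\n"
    // widen each byte to dchar, then re-encode the result as UTF-8
    string s = buf.map!(b => cast(dchar) b).array.to!string;
    write(s);
}
```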


 Here's the old thread.

 
https://forum.dlang.org/thread/lehgyzmwewgvkdgra...@forum.dlang.org





Re: AA and struct with const member

2021-12-29 Thread Era Scarecrow via Digitalmars-d-learn
On Wednesday, 29 December 2021 at 01:11:13 UTC, Stanislav Blinov 
wrote:
Because opIndexAssign cannot distinguish at compile time 
between initialization and assignment:


```d
Stuff[Key] aa;
aa[key] = Stuff(args); // ostensibly, initialization
aa[key] = otherStuff;  // assignment to existing value
```

Same syntax, different behavior. This can only be caught at 
runtime. `require` and `update` though should be able to pull 
this off, and that they don't is a bug.


So I wonder if const and immutable would have different behaviors, then.


While you shouldn't be able to explicitly change a const item within a struct, replacing the whole struct I would think would be okay, on the basis that you're basically throwing the whole old item away (*and it may be equivalent to what you'd do with, say, swap*).


 Immutable, on the other hand, may want to refuse, as it should basically have the lifetime of the array? Though if you can delete the item and then just add it in again, that's a longer version of the same thing; it just depends on whether anything is using/referencing it or not. And casting away constness is easy enough, so maybe it won't be different.


 Though if it's a basic type, it seems unlikely it would need a longer lifetime; and if it's a reference or array, it's already separate from the struct and needs no such protection for the pointer.


 I don't know. I remember odd behavior with const/non-const stuff 
before.


Re: AA and struct with const member

2021-12-28 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 28 December 2021 at 07:51:04 UTC, frame wrote:
On Tuesday, 28 December 2021 at 01:45:42 UTC, Era Scarecrow 
wrote:



 Success!

 So to summarize: either work with a pointer, or drop the const...


Of course casting the const away was the first thing I did but 
I think this is not very clean :D


Well, the next step up would be: if the key does exist, you could memcpy the result... which can have issues with non-native basic types.


 Probably better to make the data private vs making it const. I tend to use const far more on input arguments, to denote that a function won't change what's referenced, and less for elements in a struct. That, or make it a class? I'm not sure.


Re: AA and struct with const member

2021-12-27 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 27 December 2021 at 19:38:38 UTC, frame wrote:
I feel stupid right now: One cannot assign a struct that 
contains const member to AA?


Error: cannot modify struct instance ... of type ... because it 
contains `const` or `immutable` members


This is considered a modification?
```d
struct S
{
  const(int) a;
}

S[string] test;
test["a"] = S(1);
```

Whats the workaround for that?


const/immutable members are to be set/assigned at instantiation. Most likely the problem is a bug, and it sounds like:


a) the struct doesn't exist in the AA, so it creates it (with a 
default)

b) It tries to copy but contains a const and thus fails

Passing a pointer will do you no good, since structs are likely 
to be on the stack.


So let's try opAssign.

```d
auto ref opAssign(S s) {
    this = s;
    return this;
}
```

So we get
```
'cannot modify struct instance `this` of type `S` because it 
contains `const` or `immutable` members'.

```

Alright let's look at the members we can work with.
https://dlang.org/spec/hash-map.html

I don't see an 'add', but I do see a 'require' which will add something in. So we try that.


```d
test.require("a", S(1));
```

Now we get:
```
Error: cannot modify struct instance `*p` of type `S` because it contains `const` or `immutable` members
test.d(??): Error: template instance `object.require!(string, S)` error instantiating
```

Hmmm, it really doesn't like it. Finally, we can fake it: let's make a mirror struct without the const, for the purposes of adding it.


```d
struct S
{
  const(int) a;
}

struct S2
{
  int a;
}

S[string] test;
cast(S2[string])test = S2(1);
```
```
Error: `cast(S2[string])test` is not an lvalue and cannot be 
modified

```

Well that's not going to work. Let's make it a pointer and 
allocate it instead.


```d
S*[string] test;
test["a"] = new S(1);
```

 Success!

 So to summarize: either work with a pointer, or drop the const...


Re: How to print unicode characters (no library)?

2021-12-27 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 27 December 2021 at 07:12:24 UTC, rempas wrote:

On Sunday, 26 December 2021 at 21:22:42 UTC, Adam Ruppe wrote:
write just transfers a sequence of bytes. It doesn't know nor 
care what they represent - that's for the receiving end to 
figure out.



Oh, so it was as I expected :P


 Well, to add functionality with, say, ANSI, you entered an escape code and then stuff like offset, color, effect, etc. UTF-8 effectively has its own escape mechanism built in, with any byte 128 or over starting a multi-byte sequence, so as long as the terminal understands it, the terminal is what handles it.


 https://www.robvanderwoude.com/ansi.php

 In the end it's all just a binary string of 1's and 0's.


Re: First time using Parallel

2021-12-26 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 26 December 2021 at 15:36:54 UTC, Bastiaan Veelo wrote:
On Sunday, 26 December 2021 at 15:20:09 UTC, Bastiaan Veelo 
wrote:
So if you use `workerLocalStorage` ... you'll get your output 
in order without  sorting.


Scratch that, I misunderstood the example. It doesn't solve 
ordering. The example works because order does not matter for 
addition. Sorry for spreading wrong information.


 Maybe. I did notice early on that a bunch of output was getting mixed up:

```
0x  0x  0x  0x  0x  0x  0x  
0x35/*33,   /*, /*, /*115,  /*, /*3,   /

*9, /*3410*/*/
```

I assume it's doing several small write calls while different threads are acting at the same time. So if I used an appender string and then output the string as a single block, that would likely go away; though it wouldn't help with ordering.


 **IF** I didn't have to wait so long for results and wanted them all at once in order, I would write the results to their offsets in an array and then output it all at once at the end (*and since each thread has its own offset to write to, you don't need to lock*).
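
 That write-to-your-own-slot pattern looks something like this (the squaring stands in for the real per-item work):

```d
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // each iteration writes only to its own slot, so no locking is needed
    auto results = new ulong[](100);
    foreach (i; taskPool.parallel(iota(results.length)))
        results[i] = i * i;

    writeln(results); // ordered, printed once at the end
}
```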


Re: Double bracket "{{" for scoping static foreach is no longer part of D

2021-12-26 Thread Era Scarecrow via Digitalmars-d-learn
On Wednesday, 22 December 2021 at 16:30:06 UTC, data pulverizer 
wrote:
On Wednesday, 22 December 2021 at 16:10:42 UTC, Adam D Ruppe 
wrote:
So OUTSIDE a function, static foreach() {{ }} is illegal 
because a plain {} is illegal outside a function.


But INSIDE a function, static foreach() {{ }} is legal, but it 
isn't magic about static foreach - it is just a body with its 
optional {} present as well as a scope statement inside.


Just seen this. Thanks - I should have been more patient.


 I thought the {{ }} was mostly related to static if, namely that the block contents of a static if are inserted into the surrounding scope; so if you needed an actual scope, you'd add the second set of brackets, as the outer/first one is stripped out.
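
 A quick illustration inside a function (my own example):

```d
void main()
{
    import std.stdio : writeln;

    static foreach (i; 0 .. 3)
    {{
        // without the inner {}, `tmp` would be declared three
        // times in the same scope and fail to compile
        int tmp = i * i;
        writeln(tmp);
    }}
}
```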


 I need to once again re-familiarize myself more with D. It's 
been too long.


Re: First time using Parallel

2021-12-26 Thread Era Scarecrow via Digitalmars-d-learn
On Sunday, 26 December 2021 at 11:24:54 UTC, rikki cattermole 
wrote:
I would start by removing the use of stdout in your loop kernel 
- I'm not familiar with what you are calculating, but if you 
can basically have the (parallel) loop operate from (say) one 
array directly into another then you can get extremely good 
parallel scaling with almost no effort.


 I'm basically generating a default list of LFSRs for my Reed-Solomon codes. LFSRs can be used for pseudo-random numbers, but in this case it's to build a Galois field for error correction.


 Using it is simple: you need to know a binary number which, when XORed in as a 1 bit exits the range, will cycle through the maximum number of values (*excluding zero*). So if we do 4 bits (XOR of 3) you'd get:


```
 0 0001 -- initial
 0 0010
 0 0100
 0 1000
 1 0011 <- 0000
 0 0110
 0 1100
 1 1011 <- 1000
 1 0101 <- 0110
 0 1010
 1 0111 <- 0100
 0 1110
 1 1111 <- 1100
 1 1101 <- 1110
 1 1001 <- 1010
 1 0001 <- 0010 -- back to our initial value
```
 As such, the bulk of the work is done in the function below. The other functions leading up to it mostly figure out what the value should be according to some rules I set before testing (*quite a few candidates only need 2 bits on*).


```d
bool testfunc(ulong value, ulong bitswide) {
    ulong cnt = 1, lfsr = 2, up = 1UL << bitswide;
    // (the message is cut off here in the archive; a completion consistent
    // with the 4-bit walkthrough above would cycle until returning to the start)
    while (lfsr != 1 && cnt < up) {
        lfsr <<= 1;
        if (lfsr & up) lfsr = (lfsr ^ value) & (up - 1);
        cnt++;
    }
    return cnt == up - 1; // true if every nonzero state was visited
}
```

First time using Parallel

2021-12-25 Thread Era Scarecrow via Digitalmars-d-learn
 This is curious. I was up for trying to parallelize my code, 
specifically having a block of code calculate some polynomials 
(*Related to Reed Solomon stuff*). So I cracked open std.parallel 
and looked over how I would manage this all.


 To my surprise I found ParallelForEach, which gives the example 
of:


```d
foreach(value; taskPool.parallel(range) ){code}
```

Since my code doesn't require any memory management, shared 
resources or race conditions (*other than stdout*), I plugged in 
an iota and gave it a go. To my amazement no compiling issues, 
and all my cores are in heavy use and it's outputting results!


 Now said results are out of order (*and early results are 
garbage from stdout*), but I'd included a bitwidth comment so 
sorting should be easy.

```d
0x3,/*7*/
0x11,   /*9*/
0x9,/*10*/
0x1D,   /*8*/
0x5,/*11*/
0x3,/*15*/
0x53,   /*12*/
0x1B,   /*13*/
0x2B,   /*14*/
```
etc etc.

 Years ago, I remember having to make a struct and then pass a function and a bunch of stuff from within the struct, often breaking, and it was hard to get working at all, so I hardly ever touched this stuff. This makes outputting data MUCH faster and so easily; well, at least on a beefy computer, and not just some Chromebook I program on so it can all be on the go.



 So I suppose: is there anything I need to know about shared resources, or about how to wait until all threads are done?


Re: struct inside struct: Is there a way to call a function of the outside struct from the inner struct?

2019-07-06 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 6 July 2019 at 12:33:00 UTC, berni wrote:
Now I found this: 
https://forum.dlang.org/thread/eobdqkkczquxoepst...@forum.dlang.org


Seems to be intentional, that this doesn't work. In my case I'm 
able to move d() into the outer struct...


You'll need a pointer to the outer struct, or run it in a function which passes in a pointer to the data that's seen in the scope, I believe.


Re: Why are immutable array literals heap allocated?

2019-07-05 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 5 July 2019 at 16:25:10 UTC, Nick Treleaven wrote:
Yes, I was wondering why the compiler doesn't statically 
allocate it automatically as an optimization.


 Which I would think it could, silently adding a .dup at the end since the literal points to an unnamed memory block of N size. Or if it's immutable, I would have it point to the same shared data.


Re: Bitfields

2019-05-21 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 21 May 2019 at 17:16:05 UTC, Russel Winder wrote:
As far as I can see std.bitmanip only caters for 8, 16, 32, and 
64 bit long bitfields.


 I worked on/with bitfields in the past; the size limits more or less follow the native integer types that D supports.
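
 For reference, the standard usage, where the declared fields must total one of those native sizes (field names are just examples):

```d
import std.bitmanip : bitfields;

struct Packed
{
    mixin(bitfields!(
        uint, "x",    2,
        int,  "y",    3,
        uint, "pad", 27)); // 2 + 3 + 27 == 32, a native width
}

void main()
{
    Packed p;
    p.x = 3;
    p.y = -2;
    assert(p.x == 3 && p.y == -2);
}
```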


 However, this limitation is kinda arbitrary; for simplicity it relies on shifting bits. Going larger, or to any byte size, is possible depending on what needs to be stored, but it's the speed that really takes a penalty when you aren't using native types, or when you have to do a lot of shifting to get the job done.


 What's the layout of what you need? I'll see if I can't make something that would work for you.


 It would be better if you could use an object that breaks the parts down so you can fully access those parts, then just re-store it into the limited space you want for storage; that would be faster than bitfields (although not by much).


Re: Is using floating point type for money/currency a good idea?

2019-05-20 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 20 May 2019 at 12:50:29 UTC, Simen Kjærås wrote:
If you're worried that $92_233_720_368_547_758.07 (long.max) is 
not enough money for your game, I'd note that the entire 
current world economy is about a thousandth of that. Even so, 
there's std.bigint.BigInt, which has no set limit, and can in 
theory represent every whole number up to about 256^(2^64), or 
about 4 quintillion digits. You will encounter other problems 
before this limit becomes an issue.


 Yes at that point BigInt would be a better solution.

 I made a NoGC fixed-size int type that would allow you to have any sized int (once defined; Cent, anyone?) and only use stack data/space for calculations. It performed fairly decently (I still need to do the special assembly instructions for the 128/256-bit crypto extensions for faster speed on supported hardware), but otherwise it worked. Truthfully, working with the registers, carry, and other details can be a chore, though only on divide; everything else is straightforward.


 Hmmm, going with very very very very large numbers would probably be more for idle games than anything else. Though I'm not sure many of those need actual precision: after you're over, say, E+10 beyond what you used to have, what you buy isn't really relevant until it gets within E+4 of your total, so even rounding errors wouldn't matter much. Not sure what format those games use. I'm tempted to believe it's a pair of ints (one for the exponent and one for the currency), letting you get to E+4billion; it would be easy to code and do calculations with overall. I don't see BigInts being used everywhere.


Re: Is using floating point type for money/currency a good idea?

2019-05-20 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 20 May 2019 at 11:50:57 UTC, Dennis wrote:
For a simple game, I think it's the easiest to just store an 
integer of cents (or the lowest amount of currency possible).


 Back in 2003 I did this very thing when creating my C program for suggesting and helping with credit card payments, so it could make suggestions of which ones to pay off and in what amounts, as I found using float to be too unreliable.



Another option could be to use floats in a very limited way: after calculations (say, calculating interest) you could convert to an int and then back to float, i.e. money = cast(float)(cast(long)(money * 100)) / 100, to get as clean/close to the proper value as you can (though you may still end up off by a fraction of a penny).


Josh's suggestion is close:

 writefln("%f", cast(float) i / 100.0);

The format should be "%.2f", which should do fine for your purposes, as it would round to the correct value (unless you're working with numbers over say 16 million; then float will start giving issues, and doubles might be needed).



A third option could be to make your own type, a fixed-precision int: a struct whose precision you determine by some arbitrary means (bits, digits, etc.), with a toString method to handle output appropriately. Not the best, but it would also work.
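
 A bare-bones sketch of that fixed-precision idea (cents stored in a long; the names are made up):

```d
import std.stdio : writeln;

struct Money
{
    long cents;

    Money opBinary(string op)(Money rhs) const
        if (op == "+" || op == "-")
    {
        return Money(mixin("cents " ~ op ~ " rhs.cents"));
    }

    string toString() const
    {
        import std.format : format;
        return format("%d.%02d", cents / 100, cents % 100);
    }
}

void main()
{
    auto a = Money(19_99), b = Money(2_50);
    writeln(a + b); // 22.49
}
```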


I suppose the last method would be to make a BCD (Binary Coded Decimal) type. This is used in calculators (and 8-bit BASIC): a 6-byte value where every nibble (4 bits) is a digit; the first byte is the exponent (+/- 128) and the other 5 bytes store 10 digits of data, allowing VERY precise values. But the overhead and setup seem a bit impractical outside of emulation.



The easiest, I would think, is to just treat the currency as a whole number, and if you're working with fractions, only do the conversion when printing/outputting.


Re: Hookable Swap

2019-05-19 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 20 May 2019 at 02:18:51 UTC, Era Scarecrow wrote:

Here's some outputs if you are interested


 Noticing how heapify moves a large portion of elements more or less into their final locations, doing a heapify before the binary insertion sort lowers how much moving goes on quite a bit. Doing two heapifies dropped the moves in my test down a lot.


 An interesting idea to throw in partial ideas from other sorts 
to make a more efficient one.


 So binary insertion sort heap(min): BISH

 Some experiments to be done, but looks interesting.

 Sorting a 44-character string/array (quick brown fox) took about 250 comparisons and a similar number of moves. I removed a bunch that are likely from an assert in the final check/run of the heapify.



Original binary insertion sort was 154 comparisons and 466 moves.
BISH sort was 259 comparisons & 284 moves.

https://pastebin.com/raw/rJ1aWmD1


Re: Hookable Swap

2019-05-19 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 19 May 2019 at 06:13:13 UTC, Era Scarecrow wrote:
Making a struct type/array that visually outputs and displays 
compares/mutations of a type. While using the library sorting 
functions (which relies on std.algorithm.mutation.swap


Well, I've been having fun with sorting and more of this. I added a function so that on compares it checks whether the indexes have changed (indicating the item has changed) and acts accordingly. So now the compared spots look right.


Here's some outputs if you are interested

https://pastebin.com/raw/QWn6iDF3


Re: Same code different behaviour in debug & release mode?

2019-05-19 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 19 May 2019 at 17:55:10 UTC, Robert M. Münch wrote:
It seems that the debugger has quite some problem when the code 
was compiled with optimization and debug-information.


 I remember having similar problems in a C program years ago; I ended up just releasing the code unoptimized and with asserts still in place. It worked fairly well, with only a few cases making the program stall or fail (having to do with the data having sections that are backwards).


 A bit annoying...

 Though I did notice a big difference with code failing during a normal run yet working fine in a debugger (same executable). Some of this had to do with uninitialized values (which shouldn't be a problem in D), or with the debugger using the stack and making it workable from some random pointer I could never find.


Hookable Swap

2019-05-18 Thread Era Scarecrow via Digitalmars-d-learn
I'm making a struct type/array that visually outputs and displays compares/mutations of a type. While using the library sorting functions (which rely on std.algorithm.mutation.swap), swapping doesn't call opAssign and doesn't pass through the struct (it also changes the indexes, which it shouldn't).


Is there a struct override or function that forces it to be called for swaps or changes? Or do I simply have to detect changes between compares?


Re: Speed of math function atan: comparison D and C++

2018-03-04 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 5 March 2018 at 05:40:09 UTC, rikki cattermole wrote:
atan should work out to only be a few instructions (inline 
assembly) from what I've looked at in the source.


Also you should post the code you used for each.


 Should be 3-4 instructions. Load input to the FPU (Optional? 
Depends on if it already has the value loaded), Atan, Fwait 
(optional?), Retrieve value.


 Offhand, from what I remember, FPU instructions run in their own separate unit and should more or less take only a few cycles by themselves (and also run in parallel to the CPU code).


 At which point, if the code is running at half the speed of C++'s, that probably means bad optimization elsewhere, or even the control settings for the FPU.


 I really haven't looked that in depth to the FPU stuff since 
about 2000...


Re: Caesar Cipher

2018-02-11 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 11 February 2018 at 18:01:20 UTC, Mario wrote:

char[] encrypt(char[] input, char shift)
{
auto result = input.dup;
result[] += shift;
return result;
}


What's wrong? I mean, I know that z is being converted into a 
symbol, but how should I fix this?


 If you take Z (25) and add 10, you get 35. You need to have it 
identify and fix the problem, namely removing 26 from the result.


 Assuming anything can be part of the input (and not just 
letters), we instead do the following:


```d
auto result = input.dup;
foreach (ref ch; result) {
    if (ch >= 'A' && ch <= 'Z')
        ch = ((ch + shift - 'A') % 26) + 'A';
}
```

 Alternatively, if you build a table where every character's substitution is defined (and those not changing map to themselves), it could just be a replacement & lookup.




Re: ESR on post-C landscape

2017-11-14 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 14 November 2017 at 06:32:55 UTC, lobo wrote:
And I fixed it all right – took me two weeks of struggle. After 
which I swore a mighty oath never to go near C++ again. 
...[snip]"


 Reminds me of the last time I touched C++. A friend wanted help 
with the Unreal Engine. While skeptical the actual headers and 
code I was going into were... really straight forward. #IfDef's 
to encapsulate and control if something was/wasn't used, and 
simple C syntax with no overrides or special code otherwise.


 But it was ugly... it was verbose... it was still hard to find 
my way around. And I still don't want to ever touch C++ if I can 
avoid it.


Re: BinaryHeap as member

2017-11-13 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 13 November 2017 at 16:26:20 UTC, balddenimhero wrote:
In the course of writing a minimal example I removed more than 
necessary in the previous pastebin (the passed IntOrder has not 
even been used). Thus here is the corrected one: 
https://pastebin.com/SKae08GT. I'm trying to port this to D.


 Throwing together a sample involves wrapping the value in a new type, but the idea still comes across...

 Not sure if this is the best way to do it, but it only takes a little dereferencing to access the value.


Compiled w/DMD v2.069.2

[code]
import std.container.binaryheap;
import std.range : iota;
import std.array;
import std.stdio;

void main()
{
    int len = 10;
    int[] a = iota(len).array;

    auto foo = new WeightedHeap!int([0,2,4,6,8], a);

    foreach(v; foo.h)
        writeln(v.weight, "\t", *v.v);
}

struct WeightedHeap(T) {
    this(int[] order, T[] arr) {
        foreach(i, ref v; arr) {
            a ~= E(order[i % $], &v);
        }

        h = BinaryHeap!(E[])(a);
    }

    E[] a;
    BinaryHeap!(E[]) h;
//  alias h this;

    static struct E {
        int weight;
        T* v;
        //alias v this;

        int opCmp(E a) const {
            return a.weight - weight;
        }
    }
}
[/code]

Output:
Weight  Value
0   5
0   0
2   1
2   6
4   7
4   2
6   3
6   8
8   4
8   9


Re: find difference between two struct instances.

2017-07-21 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 21 July 2017 at 21:03:22 UTC, FoxyBrown wrote:
Is there a way to easily find the differences between to struct 
instances? I would like to report only the differences


e.g.,

writeln(s1 - s2);

prints only what is different between s1 and s2.


 This is entirely dependent on the structs in question; you can't just subtract one struct from another unless it knows how to do that.


 It depends on what the structures hold. You'll probably have to either define opBinary!"-" (a subtraction operator), write a function to do the field-by-field diff, or define opCmp, which returns which is higher/lower (and may be as simple as a subtraction).
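
 For plain field-by-field reporting, introspection can do it generically; a sketch (the struct P and printDiff are made up for illustration, and this assumes fields with comparable, printable types):

```d
import std.stdio : writeln;

// compare two instances member by member and print the differences
void printDiff(S)(S a, S b)
{
    foreach (name; __traits(allMembers, S))
    {
        auto x = __traits(getMember, a, name);
        auto y = __traits(getMember, b, name);
        if (x != y)
            writeln(name, ": ", x, " -> ", y);
    }
}

struct P { int x, y; string color; }

void main()
{
    printDiff(P(1, 2, "red"), P(1, 3, "blue"));
    // y: 2 -> 3
    // color: red -> blue
}
```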





Re: How to init immutable static array?

2017-07-18 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 18 July 2017 at 07:30:30 UTC, Era Scarecrow wrote:

  my_array[i]=some calculations(based on constants and n)


i meant: tmp[i]=some calculations(based on constants and n)


Re: How to init immutable static array?

2017-07-18 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 18 July 2017 at 07:20:48 UTC, Miguel L wrote:
Hi, I need help again. I have an immutable static array and i 
need to initialize its contents inside a for loop. Something 
like this:


```d
void f(int n)()
{
    immutable float[n] my_array;
    for (int i = 0; i < n; i++)
        my_array[i] = /* some calculations (based on constants and n) */;
}
```

 I'd probably separate the calculations into a separate function and assign the immutable data all at once (as you have to do similarly with constructors). So... I think this would work:

```d
void f(int n)()
{
    static auto calculate() {
        float[n] tmp;
        for (int i = 0; i < n; i++)
            tmp[i] = /* some calculations (based on constants and n) */;
        return tmp;
    }

    immutable float[n] my_array = calculate();
}
```

Re: How to get the address of a static struct?

2017-07-09 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 10 July 2017 at 03:48:17 UTC, FoxyBrown wrote:

static struct S

auto s = &S; // ?!?!?! invalid because S is a struct, but...

basically s = S. So S.x = s.x and s.a = S.a;

Why do I have to do this?


 Static has a different meaning for structs. More or less, it means the struct won't have access to a delegate/fat pointer to the function that encloses it. It doesn't mean there's only one instantiation ever (unlike static variables). So static is a no-op in this case (though still syntactically legal to use).


 To get the address of the struct you STILL have to instantiate it first, although you don't need to in order to access its static members.


 Though if all the members are static, it's basically a namespace 
and doing so is kinda pointless.


Re: CTFE output is kind of weired

2017-07-08 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 8 July 2017 at 21:29:10 UTC, Andre Pany wrote:

app.d(17):called from here: test("1234\x0a5678\x0a")

I wrote the source code on windows with a source file with \r\n 
file endings.
But the console output has only the character X0a. In addition 
not the content of tmp is shown but the full content with the 
slice information [4..10].


Is this the intended behavior?


 The escape sequence says it's hex, and 0a translates to 10, and 
0d is 13; \r\n is usually a 13,10 sequence. So from the looks of 
it the \r is getting stripped out.


 Funny story: as I understand it, \r and \n both had very specific meanings for printers in the old days, \r for carriage return and \n for new line. DOS may have used both, but \r more or less has no use for newlines and printers today.


 Curiously, when using writeln all newlines seem to get \r sequences added. So I suppose it's one of those things I never really needed to think about.


Re: Funny issue with casting double to ulong

2017-07-04 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 4 July 2017 at 12:32:26 UTC, Patrick Schluter wrote:
In times of lore, BCD floats were very common. The Sharp Pocket 
Computer used a BCD float format and writing machine code on 
them confronts one with the format. The TI-99/4A home computer 
also used a BCD float format in its Basic interpreter. It had 
the same properties as the float format of the TI calculators, 
I.e. 10 visible significant digits (+ 3 hidden digits) and 
exponents going from -99 to +99.


If you look at the instruction set for 6502 (and probably similar 
4-8bit CPU's) they literally don't deal with anything other than 
8bit add/subtraction & other basic binary operators. Without 
multiplication or division all of that has to be simulated in 
software. And only needing 10 instructions to do just about 
everything, well...


BCD of course has a big built-in advantage: because it's all base 10, converting BCD to a string and printing it is very fast (as well as precise). And with 1 byte reserved for the exponent, raising/lowering it is also very easy and covers a very large range.


There's also a series of algorithms for calculating some of the 
more complex functions using small tables or an iteration of 
shift & add which is implemented on calculators allowing a 
simpler (and reduced) instruction set or fewer transistors to 
make calculators work (CORDIC). It's actually pretty fascinating 
to read about.


BCD could still be an option though... I could probably write one; although with doubles available, you probably don't need it.

Re: Funny issue with casting double to ulong

2017-07-03 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 3 July 2017 at 06:20:22 UTC, H. S. Teoh wrote:
On Mon, Jul 03, 2017 at 05:38:56AM +, Era Scarecrow via 
Digitalmars-d-learn wrote:
I almost wonder if a BCD, fixed length or alternative for 
floating point should be an option...


From what I've heard, word on the street is to avoid using 
floating-point for money calculations, and use fixed-point 
arithmetic instead (I.e., basically ints / longs, with a 
built-in decimal point in a fixed position).  Inexact 
representations of certain fractions of tens like the above are 
one reason for this.


I don't think there's a way to change how the FPU works -- the 
hardware is coded that way and can't be changed.  You'd have to 
build your own library or use an existing one for this purpose.


 It's been a while; I do recall there were BCD options, and I actually found a few of the instructions. However, they are more about loading/storing the value, not about working strictly in that mode. The last time I remember seeing references to BCD work was around 2000.


 I'll have to look further before I find (or fail to find) all that's BCD-related. Still, if it IS available, it would be an x87-only option and thus wouldn't be portable unless the language or a library offered support.


Re: Funny issue with casting double to ulong

2017-07-02 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 3 July 2017 at 03:57:25 UTC, Basile B wrote:

6.251 has no perfect double representation. It's real value is:


 I almost wonder if a BCD, fixed-length, or alternative 
floating-point format should be an option... Either a library, 
or a hook to change how the FPU works, since doubles are 
supposed to give 15-17 decimal digits of precision for the 
purposes of money and the like without relying on such imperfect 
conversions.
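
 For the curious, the issue is easy to reproduce; a quick sketch 
(outputs hedged, since the exact result depends on rounding):

import std.math : lround;
import std.stdio;

void main() {
    double v = 6.251;
    // 6.251 is stored as the nearest representable double, which can sit
    // just below the exact value, so scaling and truncating may land one off:
    writeln(cast(ulong)(v * 1000)); // may print 6250 rather than 6251
    writeln(lround(v * 1000));      // rounds to nearest instead: 6251
}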


Re: Is D slow?

2017-06-09 Thread Era Scarecrow via Digitalmars-d-learn
On Friday, 9 June 2017 at 18:32:06 UTC, Steven Schveighoffer 
wrote:

Wow, so that's how D code would look like if it were C++ :)


 When dipping my toes into C++ to do a quicksort algorithm, I 
quickly got annoyed that I'd have to create all the individual 
comparison operators rather than just one function like in D... 
which is one thing I'm seeing from the converted 'toy'. Actually 
I ended up making an opCmp and then overloading all the 
individual operators to call the opCmp.
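
 In D terms, a rough sketch of what one opCmp buys you (toy 
types, not from the benchmark in question):

import std.algorithm : sort;

struct Version {
    int major, minor;
    // One function covers <, <=, >, >= and sorting.
    int opCmp(const Version rhs) const {
        if (major != rhs.major) return major - rhs.major;
        return minor - rhs.minor;
    }
}

void main() {
    auto vs = [Version(2, 0), Version(1, 5), Version(1, 2)];
    sort(vs); // uses opCmp via a < b
    assert(vs[0] == Version(1, 2));
    assert(Version(1, 5) < Version(2, 0)); // rewritten as opCmp < 0
}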


Re: .sort vs sort(): std.algorithm not up to the task?

2017-06-07 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 8 June 2017 at 02:19:15 UTC, Andrew Edwards wrote:
Pretty funny. But seriously, this is something that should just 
work. There is now to layers of indirection to achieve what I 
used to do quite naturally in the language.


 Hmmm, while working on my recent sudoku solver using pointers to 
structs, I had opCmp defined, but it was still sorting by pointer 
address rather than how I told it to sort; I had to give it the 
hint of sort!"*a < *b" for it to work right.


 It does seem like a little more duct-tape on the wizard is 
needed in some cases. Thankfully it isn't too complex to know 
where to tape the wand into the wizard's hand.
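
 A minimal sketch of the situation (made-up struct, but the same 
fix):

import std.algorithm : sort;

struct Cell {
    int weight;
    int opCmp(const ref Cell rhs) const { return weight - rhs.weight; }
}

void main() {
    Cell a = Cell(3), b = Cell(1), c = Cell(2);
    Cell*[] cells = [&a, &b, &c];
    // sort(cells) would order by address; dereference explicitly instead:
    sort!"*a < *b"(cells);
    assert(cells[0].weight == 1 && cells[2].weight == 3);
}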


Re: rawRead using a struct with variable leght

2017-06-07 Thread Era Scarecrow via Digitalmars-d-learn

On Wednesday, 7 June 2017 at 18:31:41 UTC, H. S. Teoh wrote:
"Structs" with variable size fields have no direct equivalent 
in D's type system, so you'll probably have a hard time mapping 
this directly.


What you *could* do, though, is to load the data into a ubyte[] 
buffer, then create a proxy struct containing arrays where you 
have variable-sized fields, with the arrays slicing the ubyte[] 
buffer appropriately.  Unfortunately, yes, this means you have 
to parse the fields individually in order to construct these 
slices.


 I'm reminded a little of how I handled the records and 
subrecords for Morrowind files; I created a range type which 
recognized the different types and returned the records, then a 
second one that cycled through the subrecords and generated the 
structs as it went.


 Although those were incredibly simple: just 2 fields, the name 
of the field and the total length of the record (char, int). For 
subrecords it was the same, except with additional int and other 
string fields, all fixed length, no weird dynamic allocation 
required.


 Unless the arrays are stored/saved after the rest of the data, I 
don't see how you could bulk load the other fields so easily.
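
 Along the lines of the proxy-struct suggestion, a rough sketch 
(field names and layout are made up, not the poster's actual 
format):

import std.bitmanip : peek;
import std.system : Endian;

struct RecordView {
    char[4] name;           // fixed-size tag
    uint length;            // size of the variable part
    const(ubyte)[] payload; // slices the loaded buffer, no copy
}

RecordView parseRecord(const(ubyte)[] buf) {
    RecordView r;
    r.name[] = cast(const(char)[]) buf[0 .. 4];
    r.length = buf.peek!(uint, Endian.littleEndian)(4); // assumed endianness
    r.payload = buf[8 .. 8 + r.length];
    return r;
}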


Re: rawRead using a struct with variable leght

2017-06-05 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 5 June 2017 at 16:04:28 UTC, ade90036 wrote:

Unfortunately the struct doesn't know at compile time what the 
size of the constant_pool array, or at-least was not able to 
specify it dynamically.


 It also won't know ahead of time how many fields, methods or 
attributes you have either.


 First I'd say all the arrays will have to be redefined to use 
[], rather than a fixed size.


 Glancing at the chapter information, you're probably not going 
to have an easy time, and will simply have to fill in the fields 
individually in order, followed by allocating the arrays and 
probably filling/loading those immediately (although it's 
possible the array contents are done at the end, though it seems 
doubtful).





Re: Mixin in Inline Assembly

2017-06-02 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 1 June 2017 at 12:00:45 UTC, Era Scarecrow wrote:
 So why is the offset off by 14h (20 bytes)? It's not like we 
need to set a ptr first.


 Go figure, I probably found a bug...


 Well, as a side note, a simple yet unsatisfying workaround is 
making a new array slice of the memory and then using that 
pointer directly. Looking at the Intel opcodes and memory 
addressing conventions, I could have used a very compact 
instruction sequence with scaling. Instead I'm forced to ignore 
scaling, and I'm also forced to push/pop the flags to save the 
carry when advancing the two pointers in parallel. Plus there 
are 3 instructions that don't need to be there.


 Yeah, this is probably nitpicking... I can't help wanting to be 
as optimized and small as possible.


Re: "Lazy" initialization of structs

2017-06-01 Thread Era Scarecrow via Digitalmars-d-learn
On Thursday, 1 June 2017 at 12:04:05 UTC, Daniel Tan Fook Hao 
wrote:
If I'm reading this right, in the former, the struct is created 
when the function is called in run-time, and the type is then 
inferred after that? I don't really understand the behavior 
behind this.


 The only difference between the two is that the inner struct can 
hold a delegate or a pointer to the function's local variables. 
If you make the first example a 'static struct' then the two are 
100% identical (with the exception of who can see/instantiate 
the struct).


 Although since there are no function calls from the struct, I 
don't see how it should act any differently, though that might 
not prevent it from carrying the pointer there anyway.
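
 A tiny sketch of the difference (my own example):

void fun() {
    int local = 42;

    struct Nested {                  // non-static: carries a hidden context pointer
        int get() { return local; }  // can reach the enclosing function's variables
    }
    static struct Standalone {       // static: a plain struct, no hidden pointer
        int get() { return 0; }      // cannot touch `local`
    }

    Nested n;
    assert(n.get() == 42);
    assert(Nested.sizeof > Standalone.sizeof); // the context pointer takes space
}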


Re: Mixin in Inline Assembly

2017-06-01 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 23 May 2017 at 03:33:38 UTC, Era Scarecrow wrote:
From what I'm seeing, it should be 8, 0ch, 10h, then 14h, all 
positive. I'm really scratching my head why I'm having this 
issue...


What am i missing here?


More experiments, and I think it comes down to static arrays.

The following function code

int[4] fun2() {
int[4] x = void;
asm {
mov dword ptr x, 100;
}
x[0] = 200; //get example of real offset
return x;
}

Produces the following (from obj2asm)

int[4] x.fun2() comdat
assume  CS:int[4] x.fun2()
enter   014h,0
mov -4[EBP],EAX
mov dword ptr -014h[EBP],064h
mov EAX,-4[EBP]
mov dword ptr [EAX],0C8h// x[0]=200, 
offset +0

mov EAX,-4[EBP]
leave
ret
int[4] x.fun2() ends


 So why is the offset off by 14h (20 bytes)? It's not like we 
need to set a ptr first.


 Go figure, I probably found a bug...


Re: purity question

2017-05-28 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 29 May 2017 at 01:12:53 UTC, Era Scarecrow wrote:

...


 Hmm, I didn't notice the post had split, otherwise I wouldn't 
have replied... That, and I was thinking about the GC state 
(outside of allocating memory)...


Re: purity question

2017-05-28 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 28 May 2017 at 23:49:16 UTC, Brad Roberts wrote:
// do something arbitrarily complex with s that doesn't 
touch globals or change global state except possibly state of 
the heap or gc


 Sounds like the basic definition of pure to me, at least in 
regard to D. Memory allocation, which is a system call, doesn't 
actually break purity. Then again, if you were worried about not 
using the GC, there's the newer @nogc attribute.


[quote]
 TDPL pg. 165: 5.11.1 Pure functions

 In D, a function is considered pure if returning a result is 
its only effect and the result depends only on the function's 
arguments.

[/quote]


Re: Sudoku Py / C++11 / D?

2017-05-25 Thread Era Scarecrow via Digitalmars-d-learn

On Wednesday, 15 August 2012 at 20:13:10 UTC, Era Scarecrow wrote:

On Wednesday, 15 August 2012 at 15:39:26 UTC, ixid wrote:
Could you supply your code? Which one are you using as the 
hardest? If you're solving the 1400 second one in 12 seconds 
that's very impressive, I can't get it below 240 seconds.


Expanded to 225 lines after comments and refactoring for names. 
I think it should be fairly easy to follow.


https://github.com/rtcvb32/D-Sudoku-Solver


 While this is an old thread, I decided to try a different 
approach to sudoku solving. In no way is this better, just a 
different approach, at 200 lines (it needs some heavy unittests 
added, but appears to work).


 It uses a sorting method to solve the puzzle. The idea is to 
take your puzzle, sort the cells by weight (how many possible 
numbers) and only take guesses on the ones with the smallest 
number of combinations possible, meaning any puzzle with 1 
solution won't take long. The idea behind this method is to 
ignore combinations that might never come up; after all, if you 
have a block with 2 possibilities, why start brute-forcing the 
one with 7? Fewer wasted cycles (see the sketch below). Yes, it 
still uses brute force and built-in backtracking (and also 
outputs every solution found).
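
 The branching heuristic, roughly (a toy sketch assuming the 
candidates are kept as bitmasks; not the actual solver code):

size_t pickNextGuess(const uint[] candidateMasks) {
    import core.bitop : popcnt;

    size_t best = size_t.max;
    int bestCount = int.max;
    foreach (i, mask; candidateMasks) {
        const n = popcnt(mask);       // how many digits are still possible
        if (n > 1 && n < bestCount) { // n == 1 means the cell is decided
            bestCount = n;
            best = i;
        }
    }
    return best; // size_t.max if there's nothing left to guess on
}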


 Trying the REALLY REALLY hard one from before (17 numbers)? 
Well... I had it run in the background for a few hours, and got 
69,555,823 answers before the output (610Mb compressed, 11,067Mb 
uncompressed) simply filled up the free space on my ramdrive, 
crashing the program.


Re: Sorted sequences

2017-05-25 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 25 May 2017 at 10:39:01 UTC, Russel Winder wrote:
C++ has std:priority_queue as a wrapper around a heap to create 
a sorted queue. Am I right in thinking that D has no direct 
equivalent, that you have to build you own wrapper around a 
heap?


 Do you even need a wrapper?

 Glancing at priority_queue it more or less ensures the largest 
element is always first...


 However glancing at the D documentation, you already get the 
same thing.


https://dlang.org/phobos/std_container_binaryheap.html
@property ElementType!Store front();
Returns a copy of the front of the heap, which is the 
largest element according to less.



 A quick test shows inserted items are both sorted and you get 
the largest element immediately. So honestly it sounds like it's 
already built in... no modification or wrapper needed, unless of 
course I'm missing something?
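
 For instance, a quick sketch:

import std.container : heapify;

void main() {
    int[] store = [3, 1, 4, 1, 5, 9, 2, 6];
    auto heap = heapify(store); // max-heap over the backing array
    assert(heap.front == 9);    // largest element, like priority_queue::top
    heap.insert(42);
    assert(heap.front == 42);
    heap.removeFront();         // like priority_queue::pop
    assert(heap.front == 9);
}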


Re: Mixin in Inline Assembly

2017-05-22 Thread Era Scarecrow via Digitalmars-d-learn
On Wednesday, 11 January 2017 at 17:32:35 UTC, Era Scarecrow 
wrote:
 Still I think I'll implement my own version and then if it's 
faster I'll submit it.



Decided I'd try my hand at writing a 'ScaledInt', which is 
intended to basically allow any larger unsigned type. I'm coming 
across some assembly confusion.


Using mixin with assembly here's the 'result' of the mixin (as a 
final result)


alias UCent = ScaledInt!(uint, 4);

struct ScaledInt(I, int Size)
if (isUnsigned!(I) && Size > 1) {
I[Size] val;

ScaledInt opBinary(string op)(const ScaledInt rhs) const
if (op == "+") {
ScaledInt t;
asm pure nothrow { //mixin generated from another 
function, for simplicity

mov EBX, this;
clc;
mov EAX, rhs[EBP+0];
adc EAX, val[EBX+0];
mov t[EBP+0], EAX;
mov EAX, rhs[EBP+4];
adc EAX, val[EBX+4];
mov t[EBP+4], EAX;
mov EAX, rhs[EBP+8];
adc EAX, val[EBX+8];
mov t[EBP+8], EAX;
mov EAX, rhs[EBP+12];
adc EAX, val[EBX+12];
mov t[EBP+12], EAX;
}

return t;
}
}



Raw disassembly for my asm code shows this:
mov EBX,-4[EBP]
clc
mov EAX,0Ch[EBP]
adc EAX,[EBX]
mov -014h[EBP],EAX
mov EAX,010h[EBP]
adc EAX,4[EBX]
mov -010h[EBP],EAX
mov EAX,014h[EBP]
adc EAX,8[EBX]
mov -0Ch[EBP],EAX
mov EAX,018h[EBP]
adc EAX,0Ch[EBX]
mov -8[EBP],EAX


From what I'm seeing, it should be 8, 0ch, 10h, then 14h, all 
positive. I'm really scratching my head over why I'm having this 
issue... Doing an add of t[0] = val[0] + rhs[0]; I get this 
disassembly:


mov EDX,-4[EBP] //mov EDX, this;
mov EBX,[EDX]   //val[0]
add EBX,0Ch[EBP]//+ rhs.val[0]
mov ECX,8[EBP]  //mov ECX, ???[???]
mov [ECX],EBX   //t.val[0] =

If I do "mov ECX,t[EBP]", I get "mov ECX,-014h[EBP]". If I try to 
reference the exact variable val within t, it complains it 
doesn't know it at compile-time (although it's a fixed location).


What am I missing here?


Re: No tempFile() in std.file

2017-05-15 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 15 May 2017 at 22:38:15 UTC, Jonathan M Davis wrote:
Personally, I think that it would be very much worth making 
hello world larger, since hello world really doesn't matter, 
but because there are plenty of folks checking out D who write 
hello world and then look at the executable size, it was 
considered unacceptable for it to get much larger.


I'm reminded of doing the same thing in C++: switching to 
streams, I saw the size explode from 60k or so to something like 
400k, for seemingly no good reason at all.


Hmmm while we're on the subject of size, is there a tool to strip 
out functions that are never used from the final executable?


Re: Porting Java code to D that uses << and >>> operators

2017-05-01 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 1 May 2017 at 21:04:15 UTC, bachmeier wrote:

On Monday, 1 May 2017 at 18:16:48 UTC, Era Scarecrow wrote:

 Reminds me... was the unsigned shift >>> ever fixed?


What was wrong with it?


Doing a broad test, I'm seeing an issue with the short & byte 
versions... Of course that's probably due to the default 
promotion to int rather than short/byte, while >>>= works just 
fine. So...


byte f0 >> fff8
byte f0 >>> 7ff8
short f000 >> f800
short f000 >>> 7800
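
 A quick sketch showing the promotion at work (byte case):

import std.stdio;

void main() {
    byte b = cast(byte)0xF0;
    // b is promoted to int before the shift, so the "unsigned" shift
    // drags the sign-extended upper bits down instead of zero-filling
    // within 8 bits:
    writefln("%08x", b >>> 1); // 7ffffff8, not 00000078
    writefln("%08x", b >> 1);  // fffffff8
}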




Re: Porting Java code to D that uses << and >>> operators

2017-05-01 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 1 May 2017 at 15:53:41 UTC, Basile B. wrote:
It's the same code in D. It extracts consecutive bits in x12 
and x13 (and maskxx), put them at the beginning (right shift) 
and add them.


 Reminds me... was the unsigned shift >>> ever fixed?


Re: Problems with Zlib - data error

2017-04-21 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 21 April 2017 at 17:40:03 UTC, Era Scarecrow wrote:
I think I'll just go with full memory compression and make a 
quick simple filter to manage the large blocks of 0's to 
something more manageable. That will reduce the memory 
allocation issues.


 Done, and I'm happy with the results. After getting all my tests 
to work, the 660Mb input went down to 3.8Mb with the filter, and 
compressing that with Zlib brought it to 2.98Mb.


 Alas, the tool will probably be more useful in a limited scope 
(ROM hacking, for example) than anywhere else... Although if 
there's any request for the source, I can spruce it up before 
submitting it for public use.


Re: Problems with Zlib - data error

2017-04-21 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 21 April 2017 at 12:57:25 UTC, Adam D. Ruppe wrote:
But I didn't realize your thing was a literal example from the 
docs. Ugh, can't even trust that.


Which was a large part of why I was confused by it all.


Still, it would be much easier to salvage if I knew whether the 
memory being returned was freshly allocated, and whether it 
could be de-allocated once I was done with it, vs letting the GC 
manage it. The black box vs white box approach.



Take a look at zlib.d's source

http://dpldocs.info/experimental-docs/source/std.zlib.d.html#L232

It isn't a long function, so if you take that you can 
copy/paste the C parts to get you started with your own 
function that manages the memory more efficiently to drop the 
parts you don't care about.


I've worked directly with the Zlib API in the past; however that 
was mainly to get it working with AHK, letting me instantly 
compress text and see its UUEncode64 output (which was fun), as 
well as use multiple source references for better compression.




I think I'll just go with full in-memory compression and make a 
quick, simple filter to reduce the large blocks of 0's to 
something more manageable. That will reduce the memory allocation 
issues.


Re: Problems with Zlib - data error

2017-04-21 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 20 April 2017 at 20:24:15 UTC, Adam D. Ruppe wrote:
In short, byChunk reuses its buffer, and std.zlib holds on to 
the pointer. That combination leads to corrupted data.


Easiest fix is to .dup the chunk...


 So that's what's going on. But if I have to dup the blocks, then 
I have the same limited-memory problem as before. I kinda wish 
there was the gz_open from the C interface, to let it deal with 
the decompression and memory management as appropriate.


I suppose I could incorporate an 8-byte header giving the lengths 
of the leading/trailing runs of 0's and just drop the 630Mb of 
data that can be skipped... which is the bulk of the compressed 
data. I just hoped to keep it very simple.
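
 For reference, the .dup fix Adam describes would look something 
like this (a sketch of my reading of it, not tested here): copy 
each chunk before handing it to UnCompress, since byChunk reuses 
its buffer.

import std.algorithm : map;
import std.stdio;
import std.zlib : UnCompress;

void main() {
    auto decmp = new UnCompress;
    foreach (chunk; stdin.byChunk(4096).map!(x => decmp.uncompress(x.dup)))
        stdout.rawWrite(cast(const(ubyte)[]) chunk);
}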


Problems with Zlib - data error

2017-04-20 Thread Era Scarecrow via Digitalmars-d-learn
I took the UnCompress example and tried to make use of it; 
however it breaks midway through my program with nothing more 
than 'Data Error'.


[code]
//shamelessly taken for experimenting with
UnCompress decmp = new UnCompress;
foreach (chunk; stdin.byChunk(4096).map!(x => 
decmp.uncompress(x)))

[/code]

Although 2 things to note: first, I'm using an xor block of data 
that's compressed (either with gzip or using only zlib), and 
second, the size of the data is 660Mb while the compressed gzip 
file is about 3Mb. So it dies right when the data gets out of 
the large null blocks. The first 5Mb fits in 18k of compressed 
space (and could be re-compressed to save another 17%).


Is this a bug with zlib? With the Dlang library? Or is it a 
memory issue with allocation (which drove me to use this rather 
than the straight compress/decompress in the first place)?


[code]
  File xor = File(args[2], "r"); //line 53

  foreach (chunk; xor.byChunk(2^^16).map!(x => cast(ubyte[]) 
decmp.uncompress(x))) //line 59 where it's breaking, doesn't 
matter if it's 4k, 8k, or 64k.

[/code]


std.zlib.ZlibException@std\zlib.d(96): data error

0x00407C62 in void std.zlib.UnCompress.error(int)
0x00405134 in ubyte[] 
xortool.main(immutable(char)[][]).__lambda2!(ubyte[]).__lambda2(ubyte[])
0x00405291 in @property ubyte[] 
std.algorithm.iteration.__T9MapResultS297xortool
4mainFAAyaZ9__lambda2TS3std5stdio4File7ByChunkZ.MapResult.front() 
at 
c:\D\dmd2\windows\bin\..\..\src\phobos\std\algorithm\iteration.d(582)

0x0040243F in _Dmain at g:\\Patch-Datei\xortool.d(59)
0x00405F43 in 
D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv
0x00405F07 in void rt.dmain2._d_run_main(int, char**, extern (C) 
int function(char[][])*).runAll()

0x00405E08 in _d_run_main
0x00405BF8 in main at g:\\Patch-Datei\xortool.d(7)
0x0044E281 in mainCRTStartup
0x764333CA in BaseThreadInitThunk
0x77899ED2 in RtlInitializeExceptionChain
0x77899EA5 in RtlInitializeExceptionChain


Re: Duplicated functions not reported?

2017-04-16 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 15 April 2017 at 11:10:01 UTC, Stefan Koch wrote:

It would requires an O(n^2) check per declaration.
Even it is never used.
which would make imports that much more expensive.


 Seems wrong to me...

 If you made a list/array of all the functions (based purely on 
signatures) and then sorted it, any duplicates would be 
adjacent. Scanning that list takes only n-1 comparisons (after 
an O(n log n) sort).


 This assumes it's done after all functions are scanned and 
identified; doing it earlier is a waste of time and energy.
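
 A toy sketch of the idea (plain strings standing in for 
signatures):

import std.algorithm : sort;

string[] findDuplicates(string[] signatures) {
    string[] dups;
    sort(signatures);
    foreach (i; 1 .. signatures.length)       // n-1 comparisons
        if (signatures[i] == signatures[i - 1])
            dups ~= signatures[i];
    return dups;
}

unittest {
    assert(findDuplicates(["foo(int)", "bar()", "foo(int)"]) == ["foo(int)"]);
}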


Re: Error: out of memory

2017-03-19 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 18 March 2017 at 20:39:20 UTC, StarGrazer wrote:
I have some CTFE's and meta programming that cause dmd to run 
out of memory ;/


I am generating simple classes, but a lot of them. dmd uses 
about 2GB before it quits. It also only uses about 12% of cpu.


 I've noticed heavy use of foreach and temporary variables will 
end up eating a lot of your memory during CTFE... At which point 
using a plain for loop might be a better idea.


Re: Using chunks with Generator

2017-03-14 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 14 March 2017 at 10:00:24 UTC, Tetiana wrote:

Build fails with the following error:


 Just looking at the documentation, Generator is an InputRange 
while chunks requires a ForwardRange (one able to use the save 
functionality).


 So the APIs more or less don't match up.

https://dlang.org/phobos/std_concurrency.html#.Generator
https://dlang.org/phobos/std_range.html#chunks


Re: simple static if / traits question...

2017-02-22 Thread Era Scarecrow via Digitalmars-d-learn

On Wednesday, 22 February 2017 at 21:27:47 UTC, WhatMeWorry wrote:



I'm doing conditional compilation using static ifs like so:

enum bool audio   = true;



// if audio flag is present and set to true, add to code build

static if ( (__traits(compiles, audio)) && audio)   

playSound(soundSys, BLEEP );


 I think you're overthinking this. During optimization, branches 
of if statements that always evaluate to false are compiled out.
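
 So a plain if on a compile-time constant already does the job; 
static if just guarantees it. A small sketch (playSound here is 
a stub of my own, not the poster's function):

enum bool audio = true;

void playSound(string snd) { /* stub for the example */ }

void build() {
    static if (audio)       // branch never even emitted when audio is false
        playSound("BLEEP");

    if (audio)              // emitted, then constant-folded away by the optimizer
        playSound("BLEEP");
}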


Re: How do I use CTFE to generate an immutable associative array at compile time?

2017-02-21 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 21 February 2017 at 22:34:57 UTC, Chad Joan wrote:
In this case the AA isn't actually coded into the executable; 
but at least the configuration from some_data.csv will be in 
the executable as a string.  The program will construct the AA 
at startup.  It's not as "cool", but it should get the job done.


 I have a partial static AA implementation that seems to work; I 
mentioned it in a different thread.


https://github.com/rtcvb32/Side-Projects/blob/master/staticaa.d

Try it out, etc.

Usage:
Create your AA as an enum (for simplicity)

StaticAA!(KeyType, ValueType, getAALen(EnumAssociativeArray), 
EnumAssociativeArray.length)(EnumAssociativeArray);


Afterwards use it as you normally would for the same thing.

Unittest example:

enum AA = ["one":1, "two":2, "three":3, "four":4, "five":5, 
"six":6, "seven":7, "eight":8, "nine":9, "zero":0];

auto SAA = StaticAA!(string, int, getAALen(AA), AA.length)(AA);

  //just verifies the keys/values match.
  foreach(k, v; AA) {
assert(SAA[k] == v);
  }


Note: getAALen basically tests the array, expanding it out until 
none of the hashes overlap or cause problems. I've had issues 
trying to combine these into a single call, so if anyone has a 
better solution I'd go for it.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-12 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 13 February 2017 at 00:56:37 UTC, Nestor wrote:
On Sunday, 12 February 2017 at 05:54:34 UTC, Era Scarecrow 
wrote:

Ran some more tests.


Wow!
Thanks for the interest and effort.


 Certainly. But the bulk of the answer comes down to the fact 
that the 2 levels I've already provided are the fastest you're 
probably going to get. Certainly we can test using shorts or 
bytes instead, but it's likely the results will only go down.


 Note that my tests are strictly on my x86 system; it would be 
better to also test this on other systems and architectures like 
PPC and ARM to see how they perform, and possibly tweak things 
as appropriate.


 Still, we did find out there is some optimization that can be 
done successfully for the Damm algorithm; it just isn't going to 
be a lot.


 Hmmm... A thought does come to mind: parallelizing the code. 
However that would probably require 11 instances to get a 2x 
speedup (calculating the second half with all 10 possibilities 
for the carried-over digit, calculating the first half at the 
same time, then choosing which of the 10 based on the first 
half's output), which only really works if you have a ton of 
cores and the input is REALLY REALLY large, like a meg or 
something. Meanwhile the Damm code is mostly useful for 
appending a check digit to something like a UPC or barcode for 
error detection, and expecting inputs longer than 32 digits in 
real applications is unlikely.


 But at this point I'm rambling.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-11 Thread Era Scarecrow via Digitalmars-d-learn
On Saturday, 11 February 2017 at 21:56:54 UTC, Era Scarecrow 
wrote:
 Just ran the unittests under the dmd profiler, says the 
algorithm is 11% faster now. So yeah slightly more optimized.


Ran some more tests.

Without optimization but with 4 levels (a 2.5Mb table), it gains 
a whopping 27%!
However with optimizations turned on it dwindles to a mere 15% 
boost.
And with optimization + no bounds checking, 2 & 4 levels both 
give a 9% boost total.


Testing purely on 8-byte inputs (brute-forcing all combinations) 
shows the same 9% boost with negligible difference.


Safe to say, going to higher levels isn't going to give you 
sufficient improvement; also the exe file is 3Mb (but compresses 
to 150k).


Re: Can this implementation of Damm algorithm be optimized?

2017-02-11 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 12 February 2017 at 00:43:55 UTC, Nestor wrote:
I fail to see where you are declaring QG10Matrix2, because 
apparently it's an array of chars, but buildMatrix2 returns an 
array of int (2560 elements??) with lots of -1 values.


I declared it here: 
http://forum.dlang.org/post/fiamjuhiddbzwvapl...@forum.dlang.org


and it's not chars, it's ints. Part of that is to avoid the 
memory access penalty for addresses not divisible by 4, and 
another part is that the array returns not chars but numbers 
from 0-2560, which is the answer *256 (<<8); hopefully that 
offers some slight speed gain on longer inputs.


 Also the size of the array is guesstimated, and the -1's just 
signify the padding (which could be any value, but -1 makes it 
obvious). It's based on a 10x10 array, but using 4 bits per 
section, so 6 elements per row are lost, plus 6 whole rows. It's 
a necessary loss to gain speed; thankfully it's only using 10k 
(2560 members) and not 700k, as was my original guess when I was 
calculating it wrong earlier.


 When I was doing it wrong earlier, the compiler kept crashing... 
:P Running out of memory.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-11 Thread Era Scarecrow via Digitalmars-d-learn
On Saturday, 11 February 2017 at 21:41:11 UTC, Era Scarecrow 
wrote:
 But it seriously is a lot of overhead for such a simple 
function.


 Just ran the unittests under the dmd profiler; it says the 
algorithm is 11% faster now. So yeah, slightly more optimized. 
Another level and we could probably get 25%, but the built matrix 
will blow up far larger than the 10k it is now.


  Num  TreeFuncPer
  CallsTimeTimeCall
1200 1281989 1281989   0 char 
damm.checkDigit(immutable(char)[])
1200 1146308 1146308   0 char 
damm.checkDigit2(immutable(char)[])





Re: Can this implementation of Damm algorithm be optimized?

2017-02-11 Thread Era Scarecrow via Digitalmars-d-learn
On Saturday, 11 February 2017 at 21:02:40 UTC, Era Scarecrow 
wrote:
 Yes I know, which is why I had 3 inputs to calculate 2 digits, 
because the third is the temp/previous calculation.


 Alright, I've found the bug and fixed it, and it passes with 
flying colors (brute-force tests up to 6 digits); however it 
doesn't use the original function to build the table. So I'm 
satisfied it will handle any length now.


 But it seriously is a lot of overhead for such a simple function.

int[] buildMatrix2() {
string digits = "0123456789";
int[] l = new int[16*16*10];
l[] = -1; //printing the array it's obvious to see what is 
padding

foreach(a; digits)
foreach(b; digits)
foreach(c; digits) {
int t = (a-'0')*10,
t2 = (QG10Matrix[(b - '0') + t]-'0') * 10,
off = (a - '0') << 8 | (b - '0') << 4 | (c - '0');
l[off] = (QG10Matrix[(c - '0') + t2]-'0')<<8;
}

return l;
}

char checkDigit2(string str) {
int tmpdigit = 0;
for(;str.length >= 2;str = str[2 .. $])
tmpdigit = 
QG10Matrix2[tmpdigit|(str[0]-'0')<<4|(str[1]-'0')];


tmpdigit>>=8;
if (str.length==1)
return QG10Matrix[(str[0]-'0')+tmpdigit*10];

return (tmpdigit+'0') & 0xff;
}




Re: Can this implementation of Damm algorithm be optimized?

2017-02-11 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 11 February 2017 at 20:19:51 UTC, Nestor wrote:
Notice this is no ordinary matrix, but an Anti-Simmetric 
QuasiGroup of order 10, and tmpdigit (called interim in the 
algorithm) is used in each round (although the function isn't 
recursive) together with each digit to calculate final check 
digit.


 Yes I know, which is why I had 3 inputs to calculate 2 digits, 
because the third is the temp/previous calculation.


 If however you were calculating a fixed number of digits, a 
single table could be made to do it in a single lookup, assuming 
it wasn't so large as to be cumbersome or impractical.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-11 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 10 February 2017 at 11:27:02 UTC, Nestor wrote:
Thank you for the detailed reply. I wasn't able to follow you 
regarding the multilevel stuff though :(


 The idea behind it is like this (which you can scale up):

static immutable int[] QG10Matrix2 = buildMatrix2();

int[] buildMatrix2() {
string digits = "0123456789";
int[] l = new int[16*16*10];
char[3] s;
foreach(a; digits)
foreach(b; digits)
foreach(c; digits) {
s[] = [a,b,c];
l[(a-'0')<< 8|(b-'0')<<4|(c-'0')]=checkDigit(cast(string) 
s) - '0';

}

return l;
}


Using that SHOULD allow you to get the result of 2 input 
characters at a time (plus the old result):

char checkDigit2(string str) {
int tmpdigit = 0;
for(;str.length >= 2;str=str[2 .. $]) {
tmpdigit = QG10Matrix2[tmpdigit<<8|(str[0]-'0')<< 
4|(str[1]-'0')];

}
   // handle remainder single character and return value


 While it should be easy, I'm having issues trying to get the 
proper results via unittests and I'm not sure why. Probably 
something incredibly simple on my part.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-09 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 9 February 2017 at 19:39:49 UTC, Nestor wrote:
OK I changed the approach using a multidimensional array for 
the matrix so I could ditch arithmetic operations altogether, 
but curiously after measuring a few thousand runs of both 
implementations through avgtime, I see no noticeable 
difference. Why?


 Truthfully, because you'll need inputs tens of millions or 
hundreds of millions in length to determine if it makes a 
difference, and how much. Addition, subtraction and simple memory 
lookups take very little time, and since the entire array (100 
bytes) fits in the cache, it is going to perform very well 
regardless of whether you optimize it further.


 If you tested this on a much slower system, say an 8-bit 6502, 
the differences would be far more pronounced, though the relative 
ranking wouldn't change much.


 Since the algorithm is more or less O(n), optimizing it won't 
make much difference.


 It's possible you could get a speedup by making them ints 
instead of chars, since that might avoid the 'address not 
divisible by 4' penalty (which matters more on ARM architectures 
and less on x86).


 Another optimization could be to make the table multiple levels 
deep, taking the basic 100 elements and expanding them 2-3 
levels so several digits are handled in more or less a single 
lookup (100 bytes for 1 level, 10,000 for 2 levels, 1,000,000 
for 3 levels, 100,000,000 for 4 levels, etc). But the steps of 
converting the input into the array index won't give you that 
much gain; you get fewer memory lookups, but none of them will 
be cached, so any advantage from that is probably lost. Although 
if you bump the row size up to 16 instead of 10, you could use a 
shift instead of *10, which will make that slightly faster (at 
the cost of unused padded spots).


 In theory, if you avoid the memory lookup altogether you could 
gain some speed, depending on how a manual table is searched; 
although with a switch-case and a mixin to do all the details, it 
feels like it wouldn't give you any gain...


 Division operations are the slowest operations you can do, but 
otherwise most instructions run really fast. Unless you're trying 
to make it smaller (fewer bytes for the call) or shaving for 
speed by instruction cycle counting (like on the 6502), I doubt 
you'll get much benefit.


Re: Can this implementation of Damm algorithm be optimized?

2017-02-09 Thread Era Scarecrow via Digitalmars-d-learn

On Thursday, 9 February 2017 at 17:36:11 UTC, Nestor wrote:
I was trying to port C code from the article in Wikiversity [1] 
to D, but I'm not sure this implementation is the most 
efficient way to do it in D, so suggestions to optimize it are 
welcome:


import std.stdio;

static immutable char[] QG10Matrix =
  "03175986427092154863420687135917509834266123045978" ~
  "36742095815869720134894536201794386172052581436790";

char checkDigit(string str) {
  char tmpdigit = '0';
  foreach(chr; str) tmpdigit = QG10Matrix[(chr - '0') + 
(tmpdigit - '0') * 10];

  return tmpdigit;
}


Well, one thing is you can probably reduce them from chars to 
just bytes; instead of having to subtract '0' each time you can 
add it back at the end. Although unless you're working with a 
VERY large input you won't see a difference.


Actually, since you're also multiplying by 10, you can 
incorporate that into the table too... (although a mixin might be 
better for the conversion than doing it by hand):



 static immutable char[] QG10Matrix = [
     0,30,10,70,50,90,80,60,40,20,
    70, 0,90,20,10,50,40,80,60,30,
    40,20, 0,60,80,70,10,30,50,90,
    10,70,50, 0,90,80,30,40,20,60,
    60,10,20,30, 0,40,50,90,70,80,
    30,60,70,40,20, 0,90,50,80,10,
    50,80,60,90,70,20, 0,10,30,40,
    80,90,40,50,30,60,20, 0,10,70,
    90,40,30,80,60,10,70,20, 0,50,
    20,50,80,10,40,30,60,70,90, 0];

 char checkDigit(string str) {
   char tmpdigit = 0;
   foreach(chr; str) tmpdigit = QG10Matrix[(chr - '0') +
 tmpdigit];
   return (tmpdigit/10) + '0';
 }


Re: Is there anything fundamentally wrong with this code?

2017-02-04 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 3 February 2017 at 18:37:15 UTC, Johan Engelen wrote:
The error is in this line. Instead of assigning to the 
`postProc` at module scope, you are defining a new local 
variable and assigning to it.


 Wasn't the compiler supposed to warn you when you're shadowing 
another variable? Or is that only with two local ones?


Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-27 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 27 January 2017 at 07:02:52 UTC, Jack Applegame wrote:

On Monday, 16 January 2017 at 14:47:23 UTC, Era Scarecrow wrote:
static char[1024*4] buffer;  //4k reusable buffer, NOT 
thread safe


Maybe I'm wrong, but I think it's thread safe. Because static 
mutable non-shared variables are stored in TLS.


 Perhaps, but fibers or other instances of sharing the buffer 
wouldn't be safe/reliable, at least not for long.


Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-26 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 17 January 2017 at 11:40:15 UTC, Nestor wrote:
Thanks, but unfortunately this function does not produce proper 
UTF8 strings, as a matter of fact the output even starts with 
the BOM. Also it doesn't handle CRLF, and even for LF 
terminated lines it doesn't seem to work for lines other than 
the first.


 I thought you wanted to get the contents line by line, which 
would then remain UTF-16. Translating between the two types 
shouldn't be hard; to!string, or a foreach appending the code 
units to a char array, would probably convert it to UTF-8.
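
 For the transcoding step, a quick sketch (made-up data, but 
to!string does the UTF-16 to UTF-8 conversion):

import std.conv : to;

void main() {
    wchar[] utf16Line = "Ünïcode"w.dup;    // pretend this came from the file
    string utf8Line = utf16Line.to!string; // transcodes UTF-16 -> UTF-8
    assert(utf8Line == "Ünïcode");
}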


 Skipping the BOM is just a matter of skipping the first two 
bytes identifying it...


I guess I have to code encoding detection, buffered read, and 
transcoding by hand, the only problem is that the result could 
be sub-optimal, which is why I was looking for a built-in 
solution.


 Maybe. Honestly I'm not nearly as familiar with the library 
functions as I would love to be, so home-made solutions often 
seem more prevalent until I learn the lingo. A disadvantage of 
being self-taught.


Re: Referring to array element by descriptive name

2017-01-16 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 16 January 2017 at 19:03:17 UTC, albert-j wrote:
Thank you for all your answers. I was concerned because I'm 
dealing with a small function that is called many times and 
where the bulk of the calculations in the simulation takes 
place. So even 5% performance difference would be significant 
for me. But it is good to know that compilers are smart enough 
to optimize this.


 A while ago I had to come to terms with the fact that the 
optimizations the compiler does over several passes are often 
better than my own. I was using shifts, which obfuscated the fact 
that I was actually doing a divide. I tried writing a custom 
array handler to shave a few operations and save time, only to 
get no real benefit from it.


Re: Parsing a UTF-16LE file line by line, BUG?

2017-01-16 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 15 January 2017 at 19:48:04 UTC, Nestor wrote:

I see. So correcting my original doubt:

How could I parse an UTF16LE file line by line (producing a 
proper string in each iteration) without loading the entire 
file into memory?


Could... roll your own? Although if you wanted UTF-8 output 
instead, it would require a second pass, or better yet a change 
to how i is iterated.


char[] getLine16LE(File inp = stdin) {
    static char[1024*4] buffer;  //4k reusable buffer, NOT thread safe

    int i;
    //read one UTF-16LE code unit (2 bytes) at a time
    while(i + 2 <= buffer.length && inp.rawRead(buffer[i .. i+2]).length) {
        if (buffer[i] == '\n')   //low byte of an LE code unit
            break;

        i+=2;
    }

    return buffer[0 .. i];  //still raw UTF-16LE bytes, not UTF-8
}


Re: writeln and ~

2017-01-14 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 14 January 2017 at 17:42:05 UTC, Ignacious wrote:
Why can't string concatenation automatically try to convert the 
arguments? Is there any reason this is bad behavior?


 Somehow I think that everything implicitly converting to a 
string seems like a bad idea.


 Although writefln and writeln taking comma-separated arguments 
seems like a better idea than some minor JavaScript convenience.
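
 For instance (a small sketch; std.conv.text does the conversion 
when you really do want a string):

import std.conv : text;
import std.stdio;

void main() {
    int x = 42;
    writeln("x = ", x);         // no manual conversion needed
    writefln("x = %s", x);      // same idea with a format string
    string s = text("x = ", x); // concatenates mixed types into a string
    assert(s == "x = 42");
}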


Re: Referring to array element by descriptive name

2017-01-14 Thread Era Scarecrow via Digitalmars-d-learn

On Saturday, 14 January 2017 at 15:11:40 UTC, albert-j wrote:
Is it possible to refer to an array element by a descriptive 
name, just for code clarity, without performance overhead? E.g.


void aFunction(double[] arr) {
double importantElement = arr[3];
... use importantElement ...
}

But the above, I suppose, introduces an extra copy operation?


 Is the array always a fixed size? Or what?

 I wonder, since you might get away with a union, or a struct 
that simply redirects the information appropriately. However it's 
a lot of writing for very little benefit.


 But honestly, given how little you lose by copying the one 
element and then copying it back (only if you change it), I doubt 
it will mean much if you just give up on the zero-cost aliasing 
you're after. You'd have to be doing it millions of times for 
such a copy to be noticeable.
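
 That said, if a zero-cost alias is really wanted, a nested 
ref-returning function should do it (a sketch of my own, not from 
the original post):

void aFunction(double[] arr) {
    // zero-cost alias: inlines down to a direct arr[3] access
    ref double importantElement() { return arr[3]; }

    importantElement() = 42.0;
    assert(arr[3] == 42.0);
}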


Re: Merging two arrays in a uniform order

2017-01-13 Thread Era Scarecrow via Digitalmars-d-learn

On Friday, 13 January 2017 at 19:47:38 UTC, aberba wrote:

awesome. roundRobin? :)


https://dlang.org/phobos/std_range.html#.roundRobin

[quote]
roundRobin(r1, r2, r3) yields r1.front, then r2.front, then 
r3.front, after which it pops off one element from each and 
continues again from r1. For example, if two ranges are involved, 
it alternately yields elements off the two ranges. roundRobin 
stops after it has consumed all ranges (skipping over the ones 
that finish early).


roundRobin can be used to create "interleave" functionality which 
inserts an element between each element in a range.

[/quote]
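
 A quick sketch of the interleaving:

import std.algorithm : equal;
import std.range : roundRobin;

void main() {
    auto a = [1, 3, 5];
    auto b = [2, 4, 6];
    assert(roundRobin(a, b).equal([1, 2, 3, 4, 5, 6]));
}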


Re: Mixin in Inline Assembly

2017-01-11 Thread Era Scarecrow via Digitalmars-d-learn
On Wednesday, 11 January 2017 at 15:39:49 UTC, Guillaume Piolat 
wrote:
On Wednesday, 11 January 2017 at 06:14:35 UTC, Era Scarecrow 
wrote:


Suddenly reminds me of some of the speedup assembly I was writing 
for wideint, but it seems I lost my code. Too bad; the 128-bit 
multiply had sped up and the division still needed some work.


I'm a taker if you have some algorithm to reuse 32-bit divide 
in wideint division instead of scanning bits :)


 I remember the divide was giving me some trouble. The idea was 
to use the registers and instructions the hardware already gives 
you to take advantage of the full 128-bit dividend x86 supports; 
unfortunately if the quotient is too large to fit in a 64-bit 
result it raises an exception, rather than giving me half the 
result and letting me work with it.

 Still I think I'll implement my own version and then if it's 
faster I'll submit it.


Re: Mixin in Inline Assembly

2017-01-10 Thread Era Scarecrow via Digitalmars-d-learn

On Tuesday, 10 January 2017 at 10:41:54 UTC, Basile B. wrote:

don't forget to flag

asm pure nothrow {}

otherwise it's slow.


Suddenly reminds me of some of the speedup assembly I was writing 
for wideint, but it seems I lost my code. Too bad; the 128-bit 
multiply had sped up and the division still needed some work.


Re: Getch() Problem: C vs D

2017-01-09 Thread Era Scarecrow via Digitalmars-d-learn

On Monday, 9 January 2017 at 20:12:38 UTC, Adam D. Ruppe wrote:
Probably a bug, though I don't like using the getch function, I 
usually use the full input stream.


 For direct interactions (a game menu or similar) getting 
individual characters makes sense; I can't help but think of 
Rogue-likes. However for data input (on a per-line basis) or bulk 
data processing, it doesn't work well.


 Something to comment on: a while back, when I was first getting 
into C and MS-DOS assembly programming, I did a direct file copy 
using only one character read/write at a time. A meg-sized file 
probably took a minute or so, while with something as small as a 
4k buffer it took moments (approx 8000x faster). This was back in 
1996 or so; still, the advantages of working in bulk are obvious.


Re: Getch() Problem: C vs D

2017-01-08 Thread Era Scarecrow via Digitalmars-d-learn

On Sunday, 8 January 2017 at 21:19:15 UTC, LouisHK wrote:
And works fine, but the D version below nothing happens when I 
hit ESCAPE:


Is this a bug or there is another approach?


Could this be because of how the console input is handled? Kinda 
like how shift and other modifier keys are toggles rather than 
dedicated to specific codes?


 Regardless, try ^[ ( Ctrl+[ ), which is 27, and ^] which is 29.

