Re: Managing malloced memory

2021-10-12 Thread anon via Digitalmars-d-learn
On Wednesday, 6 October 2021 at 18:29:34 UTC, Steven 
Schveighoffer wrote:

```d
struct GCWrapped(T)
{
   private T *_val;
   this(T* val) { _val = val; }
   ref T get() { return *_val; }
   alias get this; // automatically unwrap
   ~this() { free(_val); _val = null; }
   @disable this(this); // disable copying to avoid double-free
}

GCWrapped!T *wrap(T)(T *item) {
  return new GCWrapped!T(item);
}

// usage
auto wrapped = wrap(cFunction());

// use wrapped wherever you need to access a T.
```

RE: @disable this(this);
I noticed that std.typecons.RefCounted only works on structs if 
you add this line. How is that? Is RefCounted catching an 
exception and working around it, or does the compiler treat 
structs like GCWrapped with a disabled postblit differently, 
using other operations for them automatically where it would 
otherwise have copied them? My guess: opAssign gets converted to 
a move automatically.


Re: Managing malloced memory

2021-10-11 Thread anon via Digitalmars-d-learn
On Thursday, 7 October 2021 at 11:55:35 UTC, Steven Schveighoffer 
wrote:
The GC is technically not required to free any blocks ever. But 
in general, it does.


When it does free a struct, as long as you allocated with 
`new`, it should call the dtor.


In practice when I played around with it, destructor always got 
called by GC. But: https://dlang.org/spec/class.html#destructors 
says at point 6:
The garbage collector is not guaranteed to run the destructor 
for all unreferenced objects.
Is it the same for structs or are these destructors guaranteed to 
be called? Would it be suitable to clean up tempfiles with 
GC-managed structs?
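For cleanup that must not be skipped, such as temp files, one deterministic alternative is to tie removal to scope exit instead of relying on GC finalization at all. A minimal sketch (the file name is arbitrary):

```d
import std.file : exists, remove, tempDir, write;
import std.path : buildPath;

void useScratchFile()
{
    auto name = buildPath(tempDir, "scratch-example.tmp");
    write(name, "scratch data");
    // Runs deterministically when the scope ends, GC or no GC.
    scope(exit) if (name.exists) name.remove();
    // ... use the file here ...
}

void main()
{
    useScratchFile();
    assert(!buildPath(tempDir, "scratch-example.tmp").exists);
}
```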


Just FYI, you should reply to the posts that you quote, or at 
least copy the "X Y wrote" line so people understand the thread.


Alright. If I want to reply to multiple people, should I post 
twice or quote both in the same post?


The destructor is called once per copy. This is why disabling 
copy prevents double freeing.


There are cases where the compiler avoids calling the 
destructor because the instance is moved. Such as returning a 
newly constructed item (typically referred to as an "rvalue"), 
or passing a newly constructed item into a parameter. The 
parameter will be destroyed, but the call-site constructed item 
will not.


e.g.:

```d
struct S
{
   int x;
   ~this() { writeln("destructor called"); }
}

void foo(S s) {

   // destructor for s called here
}

S makeS(int x)
{
   return S(x); // no destructor called here.
}

void main()
{
   foo(S(1)); // no destructor called for this rvalue
   auto s = makeS(1);
   // destructor for s called here.
   foo(makeS(1)); // only one destructor called, at the end of foo

}
```
Is there any reference for exactly how these rules apply, or is 
this implementation defined? The 
[specification](https://dlang.org/spec/struct.html#struct-destructor) says that destructors are called when objects go out of scope. Your examples seem to suggest that this is untrue in some cases.


Re: Managing malloced memory

2021-10-06 Thread anon via Digitalmars-d-learn
I found https://dlang.org/library/std/typecons/unique.html , 
which I think solves my problem by disabling copying. Thanks for 
the help.
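In the same spirit as Unique, the core of the fix is making the owning wrapper move-only. A hand-rolled sketch of that idea for a malloc'd pointer (names are illustrative, not library API):

```d
import core.stdc.stdlib : free, malloc;

// Minimal move-only owner for a malloc'd pointer, sketching what
// disabling copy buys you.
struct MallocOwner(T)
{
    private T* ptr;
    this(T* p) { ptr = p; }
    @disable this(this);          // copying would lead to a double free
    ~this() { if (ptr) free(ptr); }
    ref T get() { return *ptr; }
    T* release()                  // hand the pointer off, dropping ownership
    {
        auto p = ptr;
        ptr = null;
        return p;
    }
}

void main()
{
    auto owner = MallocOwner!int(cast(int*) malloc(int.sizeof));
    owner.get = 7;
    assert(owner.get == 7);
    auto raw = owner.release();   // explicit transfer, never an implicit copy
    free(raw);                    // ownership is ours again, so free manually
}
```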


Re: Managing malloced memory

2021-10-06 Thread anon via Digitalmars-d-learn

Sorry for messed up post, fixed it.

On Wednesday, 6 October 2021 at 18:29:34 UTC, Steven 
Schveighoffer wrote:
You can return this thing and pass it around, and the GC will 
keep it alive until it's not needed. Then on collection, the 
value is freed.


Is the gc required to call ~this() on the struct? I remember it 
being implementation defined. Probably doesn't matter for my 
usecase, just curious.



Why is it a problem that it calls the dtor? I thought the whole 
point of refcounting is for the dtor to decrement the refcount, 
and free the malloc'd object only when the refcount has actually 
reached 0.


Yes, I'm afraid of double freeing. How do I pass an existing 
struct to RefCounted without the already existing copy calling 
the destructor on function exit?




Re: Managing malloced memory

2021-10-06 Thread anon via Digitalmars-d-learn

Thanks for the help.

On Wednesday, 6 October 2021 at 18:29:34 UTC, Steven 
Schveighoffer wrote:
You can return this thing and pass it around, and the GC will 
keep it alive until it's not needed. Then on collection, the 
value is freed.
Is the gc required to call ~this() on the struct? I remember it 
being implementation defined. Probably doesn't matter for my 
usecase, just curious.


Why is it a problem that it calls the dtor?  I thought the whole 
point of refcounting is for the dtor to decrement the refcount, 
and free the malloc'd object only when the refcount has actually 
reached 0.

Yes, I'm afraid of double freeing.


Managing malloced memory

2021-10-06 Thread anon via Digitalmars-d-learn
I interface to a C library that gives me a malloced object. How 
can I manage that pointer so that it gets freed automatically?

What I've thought of so far:
* scope(exit): not an option because I want to return that memory
* struct wrapper: Doesn't work because if I pass it to another 
function, they also destroy it (sometimes). Also same problem as 
with scope(exit)
* struct wrapped in automem/ refcounted: The struct still leaves 
original scope and calls the destructor
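One more option, sketched here for the record: store a reference count next to the pointer, so copies are allowed and only the last copy frees. This is roughly what std.typecons.RefCounted automates; the sketch assumes the C object can be released with plain free:

```d
import core.stdc.stdlib : free, malloc;

// Hand-rolled shared ownership for a malloc'd pointer.
struct Shared(T)
{
    private static struct Payload { T* ptr; size_t refs; }
    private Payload* p;

    this(T* raw)
    {
        p = cast(Payload*) malloc(Payload.sizeof);
        *p = Payload(raw, 1);
    }
    this(this) { if (p) ++p.refs; }   // copying bumps the count
    ~this()
    {
        if (p && --p.refs == 0)       // the last copy frees both blocks
        {
            free(p.ptr);
            free(p);
        }
    }
    ref T get() { return *p.ptr; }
}

void main()
{
    auto a = Shared!int(cast(int*) malloc(int.sizeof));
    a.get = 3;
    {
        auto b = a;                   // refs == 2
        assert(b.get == 3);
    }                                 // b destroyed, refs back to 1
    assert(a.get == 3);               // freed only when a is destroyed
}
```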


Re: Idiomatic way to write a range that tracks how much it consumes

2020-04-26 Thread anon via Digitalmars-d-learn
To implement your option A you could simply use 
std.range.enumerate.


Would something like this work?

```d
import std.algorithm.iteration : map;
import std.algorithm.searching : until;
import std.range : tee;

size_t bytesConsumed;
auto result = input.map!(a => a.yourTransformation)
    .until!(stringTerminator)
    .tee!(a => bytesConsumed++);
// bytesConsumed is automatically updated as result is consumed
```
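A runnable variant of that sketch, with placeholder values standing in for yourTransformation and stringTerminator (note the counter counts popped elements, which equals bytes only for byte-sized element types):

```d
import std.algorithm.iteration : map;
import std.algorithm.searching : until;
import std.array : array;
import std.range : tee;

void main()
{
    auto input = [10, 20, 30, 0, 40];    // 0 plays the stringTerminator role
    size_t itemsConsumed;
    auto result = input.map!(a => a * 2) // doubling stands in for yourTransformation
        .until!(a => a == 0)             // stop at the terminator
        .tee!(a => itemsConsumed++);     // count each element as it is popped
    assert(result.array == [20, 40, 60]);
    assert(itemsConsumed == 3);          // updated as the range was drained
}
```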


How can D Forum load so fast?

2017-03-09 Thread Anon via Digitalmars-d
The whole webpage https://forum.dlang.org/ is only 300KB in 
size. It not only supports mobile devices, but also loads much 
faster than typical modern web pages.


How can they achieve such result?


Re: Need a Faster Compressor

2016-05-24 Thread Anon via Digitalmars-d

On Tuesday, 24 May 2016 at 16:22:56 UTC, Timon Gehr wrote:
It's _exponential_ growth. We don't even want to spend the time 
and memory required to generate the strings.


The reason we have this discussion is that the worst case isn't 
rare enough to make this argument. Why compress in the first 
place if mangled names don't grow large in practice?


Since Walter is evidently still having a hard time understanding 
this, I've done a few more pathological cases, comparing LZ4 to 
Back Referencing Named Types.


For BRNT I manually replaced struct names in the mangling with 
numeric identifiers for all but one of their appearances, to 
simulate what could actually be done by the compiler. No other 
mangling changes (e.g., for eponymous templates or hidden types) 
were applied.


```d
auto foo(T)(T x)
{
    static struct Vold { T payload; }
    return Vold(x);
}
```
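For reproducibility, the hand substitution I did can be sketched as a rewrite that keeps the first occurrence of each type name and replaces later occurrences with a numbered back reference (the N0, N1 notation is mine, not an actual mangling scheme):

```d
import std.array : replace;
import std.format : format;
import std.string : indexOf;

// Keep the first occurrence of each name; replace the rest with "N<i>".
string backReference(string mangled, string[] names)
{
    foreach (i, name; names)
    {
        auto first = mangled.indexOf(name);
        if (first < 0) continue;
        auto head = mangled[0 .. first + name.length];
        auto tail = mangled[first + name.length .. $]
            .replace(name, format("N%s", i));
        mangled = head ~ tail;
    }
    return mangled;
}

void main()
{
    // Toy "mangling" where one instantiation's name appears three times.
    auto sym = "fooVoldIntbarVoldIntbazVoldInt";
    assert(backReference(sym, ["VoldInt"]) == "fooVoldIntbarN0bazN0");
}
```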


At a chain of length 10:
foo(5).foo.foo.foo.foo.foo.foo.foo.foo.foo

current: Generate a 57_581 byte symbol
lz4 -9: Generate a 57_581 byte symbol, then compress it to 412 
bytes
lzma -9: Generate a 57_581 byte symbol, then compress it to 239 
bytes

BRNT: Generate a 332 byte symbol


Upping it to twelve levels:
foo(5).foo.foo.foo.foo.foo.foo.foo.foo.foo.foo.foo

current: Generate a 230_489 byte symbol
lz4 -9: Generate a 230_489 byte symbol, then compress it to 1128 
bytes
lzma -9: Generate a 230_489 byte symbol, then compress it to 294 
bytes

BRNT: Generate a 393 byte symbol

Upping it to fourteen levels:

I'm too lazy to do more manual BRNT, so beyond this point its 
numbers are estimated based on the previously established fact 
that BRNT symbols grow linearly.


current: Generate a 922_127 byte symbol
lz4 -9: Generate a 922_127 byte symbol, then compress it to 4012 
bytes

lzma -9: Generate a 922_127 byte symbol, then compress it to 422 
bytes
BRNT: Generate a ~457 byte symbol

Upping it to sixteen levels:

current: Generate a 3_688_679 byte symbol
lz4 -9: Generate a 3_688_679 byte symbol, then compress it to 
15535 bytes
lzma -9: Generate a 3_688_679 byte symbol, then compress it to 
840 bytes

BRNT: Generate a ~527 byte symbol



I want to let that sink in: in the general case, BRNT beats even 
**LZMA**.




As if winning the compactness war while still being a symbol 
linkers won't have problems with wasn't enough, this approach 
also avoids generating giant symbols in the first place. 
Compression and/or hashing cannot do that. If D really wants to, 
it can compress symbols, but it *needs* to fix this problem 
properly *first*.


Re: DMD producing huge binaries

2016-05-22 Thread Anon via Digitalmars-d

On Sunday, 22 May 2016 at 21:40:07 UTC, Andrei Alexandrescu wrote:

On 05/21/2016 06:27 PM, Anon wrote:

On Saturday, 21 May 2016 at 20:50:56 UTC, Walter Bright wrote:

On 5/21/2016 1:49 PM, Walter Bright wrote:
We already have a compressor in the compiler source for 
compressing names:

https://github.com/dlang/dmd/blob/master/src/backend/compress.c

A faster one would certainly be nice. Anyone game?


Note how well it does:

https://issues.dlang.org/show_bug.cgi?id=15831#c5


The underlying symbols are still growing exponentially. Nobody 
has the RAM for that, regardless of what size the resultant 
binary is.

Compressing the symbol names is a bandage. The compiler needs 
a new kidney.


My understanding is that the encoding "auto" return in the 
mangling makes any exponential growth disappear. Is that the 
case? -- Andrei


No:

```d
auto foo(T)(T x)
{
    struct Vold { T payload; }
    return Vold(x);
}
```

foo(5)
_D3mod10__T3fooTiZ3fooFNaNbNiNfiZS3mod10__T3fooTiZ3fooFiZ4Vold

foo(5).foo
_D3mod38__T3fooTS3mod10__T3fooTiZ3fooFiZ4VoldZ3fooFNaNbNiNfS3mod10__T3fooTiZ3fooFiZ4VoldZS3mod38__T3fooTS3mod10__T3fooTiZ3fooFiZ4VoldZ3fooFS3mod10__T3fooTiZ3fooFiZ4VoldZ4Vold

foo(5).foo.foo
_D3mod94__T3fooTS3mod38__T3fooTS3mod10__T3fooTiZ3fooFiZ4VoldZ3fooFS3mod10__T3fooTiZ3fooFiZ4VoldZ4VoldZ3fooFNaNbNiNfS3mod38__T3fooTS3mod10__T3fooTiZ3fooFiZ4VoldZ3fooFS3mod10__T3fooTiZ3fooFiZ4VoldZ4VoldZS3mod94__T3fooTS3mod38__T3fooTS3mod10__T3fooTiZ3fooFiZ4VoldZ3fooFS3mod10__T3fooTiZ3fooFiZ4VoldZ4VoldZ3fooFS3mod38__T3fooTS3mod10__T3fooTiZ3fooFiZ4VoldZ3fooFS3mod10__T3fooTiZ3fooFiZ4VoldZ4VoldZ4Vold

Lengths: 62 | 174 | 398

Just dropping the return types to a single character ($) shrinks 
the names, but it doesn't solve the core of the problem. Still 
exponential:


foo(5)
_D3mod10__T3fooTiZ3fooFNaNbNiNfiZ($)

foo(5).foo
_D3mod38__T3fooT(S3mod10__T3fooTiZ3fooFiZ4Vold)Z3fooFNaNbNiNf(S3mod10__T3fooTiZ3fooFiZ4Vold)Z{$}

foo(5).foo.foo
_D3mod94__T3fooT{S3mod38__T3fooT(S3mod10__T3fooTiZ3fooFiZ4Vold)Z3fooF(S3mod10__T3fooTiZ3fooFiZ4Vold)Z4Vold}Z3fooFNaNbNiNf{S3mod38__T3fooT(S3mod10__T3fooTiZ3fooFiZ4Vold)Z3fooF(S3mod10__T3fooTiZ3fooFiZ4Vold)Z4Vold}Z$

Lengths: 36 | 90 | 202

Note: the part inside () is the return type of the first. The 
part in {} is the return type of the second. I left those in for 
illustrative purposes.


Re: DMD producing huge binaries

2016-05-21 Thread Anon via Digitalmars-d

On Saturday, 21 May 2016 at 20:50:56 UTC, Walter Bright wrote:

On 5/21/2016 1:49 PM, Walter Bright wrote:
We already have a compressor in the compiler source for 
compressing names:


  
https://github.com/dlang/dmd/blob/master/src/backend/compress.c


A faster one would certainly be nice. Anyone game?


Note how well it does:

https://issues.dlang.org/show_bug.cgi?id=15831#c5


The underlying symbols are still growing exponentially. Nobody 
has the RAM for that, regardless of what size the resultant 
binary is.


Compressing the symbol names is a bandage. The compiler needs a 
new kidney.


Re: Casting Pointers?

2016-05-12 Thread Anon via Digitalmars-d

On Thursday, 12 May 2016 at 08:41:25 UTC, John Burton wrote:
I've been unable to find a clear definitive answer to this so 
please point me to one if it already exists in the manual or 
the forums.


Is it safe to cast pointer types?

double data;
long p = *cast(long*)&data;

(Excuse any silly syntax errors as I'm doing this on my phone).

This used to be possible in C until people recently decided it 
not only was wrong, it had always been wrong :P (Ok I'm not 
entirely serious there but that's what this issue feels like...)


This is a Bad Idea in C because types in C aren't actually 
well-defined. `long` is required to be at least 32 bits. `double` 
is required to be a floating point type. They will often not 
match up in sizes the way you might think. The spec doesn't even 
require `double` to be IEEE (that's an optional annex).


Clang on Linux x86_64 has both `long` and `double` as 64-bit 
types.

Clang on Linux x86 has `long` as 32-bit, `double` as 64-bit.

Is this legal / valid in D and if not what is the appropriate 
way to efficiently access data like this?


In D, both `long` and `double` are defined in the spec to be 
64-bits, regardless of compiler/os/arch. It isn't "safe" (because 
casting pointers is never safe), but it should behave predictably.
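A small illustration of that predictability; the static assert encodes the spec guarantee, and the bit pattern is just the IEEE 754 encoding of 1.0:

```d
void main()
{
    // Sizes are fixed by the D spec, not by the platform.
    static assert(long.sizeof == 8 && double.sizeof == 8);

    double data = 1.0;
    long bits = *cast(long*) &data;        // reinterpret the same 8 bytes
    assert(bits == 0x3FF0_0000_0000_0000); // IEEE 754 bit pattern of 1.0
}
```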


Note that D does have its own poorly-defined types, such as 
`real`, that you will need to be careful with.




Re: CPU discussion post-DConf

2016-05-06 Thread Anon via Digitalmars-d

On Friday, 6 May 2016 at 20:32:59 UTC, Assi Zanafi wrote:

Quantum CPU before x86_128 ? what do you think ?
I really think that x86_64 is the last "classic architecture". 
Intel will never make x86 with 128 bits addresses. Qbytes will 
raise before.


I don't know which will come first, but QPUs will not obsolete 
CPUs and GPUs. QPUs are not just faster CPUs. In a very similar 
manner to how GPUs are better than CPUs for some (but not all) 
tasks, QPUs will be better for a number of problems, but many 
algorithms will not see any benefit from use of a QPU. Future 
computers will have a CPU, GPU, and QPU, each used in tandem for 
the things each is good at.


Re: Researcher question – what's the point of semicolons and curly braces?

2016-05-04 Thread Anon via Digitalmars-d

On Wednesday, 4 May 2016 at 15:46:13 UTC, Nick Sabalausky wrote:
It's touchy, because I've come across people who actually do 
genuinely believe the field has things in place deliberately to 
exclude women/ethnicities...even though...those VERY SAME 
people have never once been able to provide a SINGLE CONCRETE 
EXAMPLE of any of these alleged mechanisms they believe so 
strongly to exist.


Cognitive biases are a thing. People assume women are bad at 
math. People assume black people are violent thugs. People assume 
Asians are savant-level geniuses. People assume Native Americans 
are alcoholics. People assume Arabs are Muslims. People assume 
Muslims are terrorists. Those assumptions and biases dictate how 
we interact with the world. Sociology can describe systems and 
mechanisms that aren't controlled by people or even intentional. 
People do not even need to be aware of their biases. That doesn't 
make them not exist.


Not only that, but I've yet to come across an anti-minority or 
anti-female programmer, and even if such creatures exist, 
they're undeniably far too much in the minority to have any 
real large-scale effect on "keeping people out".


Anti- is irrelevant, as it is fairly easy to deal with. 
Perceptions and biases are what matter. As I said above, people 
(in general) assume women are bad at math. That makes them less 
likely to trust any math a female coworker does than they would 
be to trust the same math done by a male coworker. Women get 
tired of dealing with others disregarding them based on these 
assumptions, and feel unwelcome in the field. Nobody needs to be 
intentionally excluding them for them to be excluded. I know I 
wouldn't keep working somewhere if nobody took me seriously.


The vast majority that I've seen are far more likely to 
*dislike* the field's current gender imbalance.


In much the same way programming is predominantly male (or "a 
goddamn sausage-fest" as I see it), nursing is predominantly 
female. So why did none of US pursue careers in nursing? Was it 
because we hit roadblocks with systems in place within the 
nursing field designed to exclude males? Or was it because we 
just plain weren't interested and had the freedom to choose a 
field we DID have interest in instead?


Actual answer: boys are raised to view nursing as a girl's job, 
and taught that they should not pursue "feminine" jobs. The same 
is true of girls being taught not to pursue "masculine" jobs. 
This is changing, thankfully, but the outdated views are still 
pervasive.


Systems DO undeniably exist, for this very field, that are very 
plainly and deliberately sexist or racist though...but just not 
in the way some people believe. Unlike the others, I CAN 
provide a real concrete verifiable example: There are a lot of 
Computer Science grants and scholarships for students that list 
"female" or "non-caucasian" (or some specific non-caucasian 
race) as a mandatory requirement. I came across a bunch of 
those when I was trying to get financial aid for college. But 
there are NONE that require "male" or "caucasian" - it would 
never be permitted anyway, they'd get completely torn to shreds 
(and for good reason). The only ONE I did hear of was only a 
publicity stunt to point out the hypocrisy of all the sexist 
anti-male grants/scholarships.


And yet plenty of male-targeted outreach programs and 
scholarships do exist. In fields like nursing, for example. The 
point of such programs/scholarships is to create downward 
pressure on K-12 schooling to raise kids to not view *themselves* 
through these bias filters.


Verifiable fact: My sister paid considerably less than I did 
for each year of college even though we came from EXACTLY the 
same economic background, exactly the same city/town, exactly 
the same ethnicity, nearly the same age (and yet she's slightly 
younger, so if anything, increasing tuition rates would have 
worked AGAINST her), and one of our respective colleges was 
even the exact same school. And her pay now is (considerably) 
higher than mine, and she works in a field that's known to pay 
LESS than my field.


Not enough information in this anecdote. Your sister could have 
had higher grades than you, granting her more scholarship money. 
Need-based grants/scholarships take into account the number of 
kids parents are supporting and how many are in university, and 
the kids' incomes (if any). She may have also applied herself 
more and gotten promoted more which could reverse the expected 
pay gap.


Anti-female systems in place? Bull fucking shit. Anyone who 
claims there are: put up REAL fucking examples instead of 
parroting vacuous rhetoric or shut the fuck up forever.


You are really starting to sound like the type of person who 
denies climate change because it was chilly in your city 
yesterday. Please don't be that person. Nobody likes that person. 
Nobody takes that person seriously.


I've had far more than enough of the mother fucking 

Re: Policy for exposing range structs

2016-03-31 Thread Anon via Digitalmars-d

On Thursday, 31 March 2016 at 20:40:03 UTC, Adam D. Ruppe wrote:
meh, if it is allowed, it is going to happen. Why make it worse 
when there's so little cost in making it better?


Define "little cost". Whatever compression algorithm chosen will 
need support added to any/all tools that want to demangle D. GDB 
and LLDB currently link to liblzma (on my system, at least. 
Configurations may vary). nm and objdump don't link to any 
compression lib. Good luck convincing binutils to add compression 
dependencies like that for D when they don't need them for any 
other mangling scheme.


And no, ddemangle isn't a solution here, as then all those 
projects would need to be able to refer to it, and the best way 
for them to do that is to bundle it. Since ddemangle is written 
in D, that would mean binutils would suddenly depend on having a 
working D compiler. That won't happen in the next decade.


Also, any D code that uses core.demangle for any reason would 
suddenly depend on that compression library.


I'm not even fully convinced that my bootstring idea is low 
enough cost, and it's fairly simple, fast, and memory efficient 
compared to any compression algorithm.


I often don't actually modify the string at all and by putting 
the string as a template argument, it enables a kind of 
compile-time memoization like I talked about here a short while 
ago: http://stackoverflow.com/a/36251271/1457000


The string may be exceedingly long if imported from a file or 
generated externally and cached:


MyType!(import("definition.txt")) foo;

enum foo = ctfeFunction();

MyType!foo test;


Just assigning to an enum gets you memoization (tested on LDC w/ 
DFE 2.68.2).

I don't see how the template factors into this.

Now, yes, if you call the function directly multiple times 
assigning to different enums, it won't memoize those. And it 
doesn't work lazily how the SO question asked for, but this does:


enum lazily(string name: "foo") = ctfeFunction();

If you don't refer to lazily!"foo", ctfeFunction() never gets 
called. If you do, it gets called once, regardless of how many 
times you use lazily!"foo".


That gives you lazy memoization of any CTFEable function without 
ever needing the function parameters to become template 
parameters.
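A self-contained version of that pattern; the loop stands in for an expensive CTFE computation, and the static assert shows the result is available at compile time from the single cached instantiation:

```d
int ctfeFunction()
{
    // Stands in for an expensive compile-time computation.
    int sum = 0;
    foreach (i; 0 .. 100) sum += i;
    return sum;
}

enum lazily(string name : "foo") = ctfeFunction();

void main()
{
    // Both uses refer to the one cached enum; if lazily!"foo" were
    // never mentioned, ctfeFunction would never be run at compile time.
    static assert(lazily!"foo" == 4950);
    assert(lazily!"foo" == 4950);
}
```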


I'm not sure what MyType is, but that looks like a prime 
candidate for my previous post's mixin examples. If not, you 
could use "definition.txt" as its parameter, and have it import() 
as an implementation detail.



$ is actually a valid identifier character in C


Nope. $ as an identifier character is a commonly supported 
extension, but code that uses it doesn't compile with `clang 
-std=c11 -Werror -pedantic`.


Re: Policy for exposing range structs

2016-03-31 Thread Anon via Digitalmars-d

On Thursday, 31 March 2016 at 17:52:43 UTC, Adam D. Ruppe wrote:
Yeah, but my thought is the typical use case isn't actually the 
problem - it is OK as it is. Longer strings are where it gets 
concerning to me.


Doubling the size of UTF-8 (the effect of the current base16 
encoding) bothers me regardless of string length. Especially when 
use of __MODULE__ and/or __FILE__ as template arguments seems to 
be fairly common.


Having thought about it a bit more, I am now of the opinion that 
super-long strings have no business being in template args, so we 
shouldn't cater to them.


The main case with longer strings going into template arguments 
I'm aware of is for when strings will be processed, then fed to 
`mixin()`. However, that compile time string processing is much 
better served with a CTFE-able function using a Rope for 
manipulating the string until it is finalized into a normal 
string. If you are doing compile-time string manipulation with 
templates, the big symbol is the least of your worries. The 
repeated allocation and reallocation will quickly make your code 
uncompilable due to soaring RAM usage. The same is (was?) true of 
non-rope CTFE string manipulation. Adding a relatively 
memory-intensive operation like compression isn't going to help 
in that case.


Granted, the language shouldn't have a cut-off for string length 
in template arguments, but if you load a huge string as a 
template argument, I think something has gone wrong in your code. 
Catering to that seems to me to be encouraging it, despite the 
existence of much better approaches.


The only other case I can think of where you might want a large 
string as a template argument is something like:


```
struct Foo(string s)
{
mixin(s);
}

Foo!q{...} foo;
```

But that is much better served as something like:

```
mixin template Q()
{
mixin(q{...}); // String doesn't end up in mangled name
}

struct Foo(alias A)
{
mixin A;
}

Foo!Q foo;
```

Or better yet (when possible):

```
mixin template Q()
{
... // No string mixin needed
}

struct Foo(alias A)
{
mixin A;
}

Foo!Q foo;
```

The original mangling discussion started from the need to either 
fix a mangling problem or officially discourage Voldemort types. 
Most of the ideas we've been discussing and/or working on have 
been toward keeping Voldemort types, since many here want them. 
I'm not sure what use case would actually motivate compressing 
strings/symbols.


My motivations for bootstring encoding:

* Mostly care about opDispatch, and use of __FILE__/__MODULE__ as 
compile-time parameters. Symbol bloat from their use isn't 
severe, but it could be better.

* ~50% of the current mangling size for template string parameters
* Plain C identifier strings (so, most identifiers) will end up 
directly readable in the mangled name even without a demangler
* Retains current ability to access D symbols from C (in contrast 
to ideas that would use characters like '$' or '?')
* I already needed bootstring encoding for an unrelated project, 
and figured I could offer to share it with D, since it seems like 
it would fit here, too


Re: Policy for exposing range structs

2016-03-31 Thread Anon via Digitalmars-d

On Thursday, 31 March 2016 at 16:46:42 UTC, Adam D. Ruppe wrote:

On Thursday, 31 March 2016 at 16:38:59 UTC, Anon wrote:
I've been spending my D time thinking about potential changes 
to how template string value parameters are encoded.



How does it compare to simply gzipping the string and writing 
it out with base62?


My encoding is shorter in the typical use case, at least when 
using xz instead of gzip. (xz was quicker/easier to get raw 
compressed data without a header.)


1= Raw UTF-8, 2= my encoder, 3= `echo -n "$1" | xz -Fraw | base64`

---
1. some_identifier
2. some_identifier_
3. AQA0c29tZV9pZGVudGlmaWVyAA==

1. /usr/include/d/std/stdio.d
2. usrincludedstdstdiod_jqacdhbd
3. AQAZL3Vzci9pbmNsdWRlL2Qvc3RkL3N0ZGlvLmQa

1. Hello, World!
2. HelloWorld_0far4i
3. AQAMSGVsbG8sIFdvcmxkIQA=

1. こんにちは世界
2. XtdCDr5mL02g3rv
3. AQAU44GT44KT44Gr44Gh44Gv5LiW55WMAA==
---

The problem is that compression isn't magical, and a string needs 
to be long enough and have enough repetition to compress well. If 
it isn't, compression causes the data to grow, and base64 
compounds that. For the sake of fairness, let's also do a larger 
(compressible) string.


Input: 1000 lines, each with the text "Hello World"

1. 12000 bytes
2. 12008 bytes
3. 94 bytes

However, my encoding is still fairly compressible, so we *could* 
route it through the same compression if/when a symbol is 
determined to be compressible. That yields 114 bytes.


The other thing I really like about my encoder is that plain C 
identifiers are left verbatim visible in the result. That would 
be especially nice with, e.g., opDispatch.


Would a hybrid approach (my encoding, optionally using 
compression when it would be advantageous) make sense? My encoder 
already has to process the whole string, so it could do some sort 
of analysis to estimate how compressible the result would be. I 
don't know what that would look like, but it could work.


Alternately, we could do the compression on whole mangled names, 
not just the string values, but I don't know how desirable that 
is.


Re: Policy for exposing range structs

2016-03-31 Thread Anon via Digitalmars-d

On Thursday, 31 March 2016 at 11:15:18 UTC, Johan Engelen wrote:

Hi Anon,
  I've started implementing your idea. But perhaps you already 
have a beginning of an implementation? If so, please contact me 
:)

https://github.com/JohanEngelen

Thanks,
  Johan


No, I haven't started implementing things for that idea. The 
experiments I did with it were by manually altering mangled names 
in Vim.


I've been spending my D time thinking about potential changes to 
how template string value parameters are encoded. My code is a 
bit messy (and not integrated with the compiler at all), but I 
use a bootstring technique (similar to Punycode[1]) to encode 
Unicode text using only [a-zA-Z0-9_].


The results are always smaller than base16 and base64 encodings. 
For plain ASCII text, the encoding tends to grow by a small 
amount. For text containing larger UTF-8 code points, the 
encoding usually ends up smaller than the raw UTF-8 string.


A couple examples of my encoder at work:

---
some_identifier
some_identifier_

/usr/include/d/std/stdio.d
usrincludedstdstdiod_jqacdhbd

Hello, World!
HelloWorld_0far4i


こんにちは世界 (UTF-8: 21 bytes)
XtdCDr5mL02g3rv (15 bytes)
---

I still need to clean up the encoder/decoder and iron out some 
specifics on how this could fit into the mangling, but I should 
have time to work on this some more later today/tomorrow.


[1]: https://en.wikipedia.org/wiki/Punycode


Re: Attribute inference for non-templated functions

2016-03-30 Thread Anon via Digitalmars-d

On Wednesday, 30 March 2016 at 15:26:21 UTC, Seb wrote:

On Wednesday, 30 March 2016 at 12:57:56 UTC, Mathias Lang wrote:
My question is whether this is just an open issue (I couldn't 
find it) or a design decision?


It's a design decision. You want to be able to fix the exact 
type of your function, in order to provide headers for them 
for example (so you can work with libraries for which the 
source code is not available).


If you want attribute inference, you can either make it a 
dummy template or, with a recent enough compiler, use `auto` 
return type.


OK so it makes sense to recommend to always use `auto` for 
non-templated functions in high-level parts of Phobos?


No. Non-templated public API should make it clear what guarantees 
it has. The easiest way to do this is explicitly mark any @safe, 
pure, etc, just as you explicitly mark the type.


The current guideline recommends to specify the return type for 
better readability in the code and documentation, but I guess 
the latter can be eventually fixed and having automatic 
attribute inference is way more important than the first point?


Being able to tell what guarantees a method has (and ensure they 
can't accidentally be broken in updates) is way more important 
than automatic inference. If you use automatic inference for a 
public API, you then need to write unittests to make sure it 
stays @safe, pure, etc. If you mark things explicitly, the 
compiler does those tests for you at compile-time.
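Spelled out, the two styles look like this; the explicit attributes on the first function are a declared contract the compiler re-checks on every change, while the template's attributes are merely inferred per instantiation:

```d
// The attributes are part of the declared contract: if a future edit
// made this impure or un-@safe, compilation would fail right here.
int triple(int x) pure nothrow @safe @nogc
{
    return 3 * x;
}

// A template, by contrast, gets its attributes inferred silently.
auto tripleT(T)(T x)
{
    return 3 * x;
}

void main() pure nothrow @safe @nogc
{
    assert(triple(2) == 6);
    assert(tripleT(2) == 6); // inferred pure/@safe here, but implicitly
}
```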




(given that most of Phobos makes extensive use of templates, 
this shouldn't be a huge issue anyway)


Templates have little choice but to infer attributes. A bit more 
rigor in public-facing non-templates is affordable and desirable.




Re: char array weirdness

2016-03-28 Thread Anon via Digitalmars-d-learn

On Monday, 28 March 2016 at 23:06:49 UTC, Anon wrote:

Any because you're using ranges,


*And because you're using ranges,




Re: char array weirdness

2016-03-28 Thread Anon via Digitalmars-d-learn

On Monday, 28 March 2016 at 22:49:28 UTC, Jack Stouffer wrote:

On Monday, 28 March 2016 at 22:43:26 UTC, Anon wrote:

On Monday, 28 March 2016 at 22:34:31 UTC, Jack Stouffer wrote:

void main () {
import std.range.primitives;
char[] val = ['1', '0', 'h', '3', '6', 'm', '2', '8', 's'];

pragma(msg, ElementEncodingType!(typeof(val)));
pragma(msg, typeof(val.front));
}

prints

char
dchar

Why?


Unicode! `char` is UTF-8, which means a character can be from 
1 to 4 bytes. val.front gives a `dchar` (UTF-32), consuming 
those bytes and giving you a sensible value.


But the value fits into a char;


The compiler doesn't know that, and it isn't true in general. You 
could have, for example, U+3042 in your char[]. That would be 
encoded as three chars. It wouldn't make sense (or be correct) 
for val.front to yield '\xe3' (the first byte of U+3042 in UTF-8).
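To make that concrete (U+3042 is あ, three bytes in UTF-8):

```d
void main()
{
    import std.range.primitives : front;

    char[] s = "あ".dup;     // U+3042
    assert(s.length == 3);   // three UTF-8 code units
    assert(s.front == 'あ');  // front decodes them into one code point
    static assert(is(typeof(s.front) == dchar));
}
```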



a dchar is a waste of space.


If you're processing Unicode text, you *need* to use that space. 
Any because you're using ranges, it is only 3 extra bytes, 
anyway. It isn't going to hurt on modern systems.


Why on Earth would a different type be given for the front 
value than the type of the elements themselves?


Unicode. A single char cannot hold a Unicode code point. A single 
dchar can.


Re: char array weirdness

2016-03-28 Thread Anon via Digitalmars-d-learn

On Monday, 28 March 2016 at 22:34:31 UTC, Jack Stouffer wrote:

void main () {
import std.range.primitives;
char[] val = ['1', '0', 'h', '3', '6', 'm', '2', '8', 's'];
pragma(msg, ElementEncodingType!(typeof(val)));
pragma(msg, typeof(val.front));
}

prints

char
dchar

Why?


Unicode! `char` is UTF-8, which means a character can be from 1 
to 4 bytes. val.front gives a `dchar` (UTF-32), consuming those 
bytes and giving you a sensible value.


Re: Policy for exposing range structs

2016-03-26 Thread Anon via Digitalmars-d
On Saturday, 26 March 2016 at 05:22:56 UTC, Andrei Alexandrescu 
wrote:

On 03/25/2016 11:40 AM, Steven Schveighoffer wrote:

On 3/25/16 11:07 AM, Andrei Alexandrescu wrote:

On 3/25/16 10:07 AM, Steven Schveighoffer wrote:


We should actually be moving *away from* voldemort types:

https://forum.dlang.org/post/n96k3g$ka5$1...@digitalmars.com



Has this bug been submitted? -- Andrei


I'm not sure it's a bug that can be fixed. It's caused by the 
design of

the way template name mangling is included.

I can submit a general "enhancement", but I don't know what it 
would

say? Make template mangling more efficient? :)

I suppose having a bug report with a demonstration of why we 
should

change it is a good thing. I'll add that.

-Steve


Compression of template names. -- Andrei


Would literally not help. The problem in the bug report is 
recursive expansion of templates creating mangled name length 
O(2^n) where n is the number of recursive calls. If you compress 
that to an eighth of its size, you get O(2^(n-3)), which isn't 
actually fixing things, as that is still O(2^n). The 
(conceptually) simple change I suggested brings the mangled name 
length down to O(n).


You could compress *that*, but then you are compressing such a 
small amount of data most compression algorithms will cause the 
size to grow, not shrink.
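
The growth being described can be reproduced with a small sketch (hypothetical reduction of the bug report's scenario):

```d
// Each nested instantiation embeds the previous Voldemort type's
// full mangled name inside its own, so the symbol roughly doubles
// per call in the chain.
auto foo(T)(T t)
{
    static struct Result { T t; }
    return Result(t);
}

void main()
{
    auto a = 1.foo;   // foo!int
    auto b = a.foo;   // foo!(foo!int.Result)
    auto c = b.foo;   // symbol length keeps compounding
    pragma(msg, typeof(c).mangleof.length); // grows fast with each .foo
}
```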


Re: Policy for exposing range structs

2016-03-25 Thread Anon via Digitalmars-d
On Friday, 25 March 2016 at 18:20:12 UTC, Steven Schveighoffer 
wrote:

On 3/25/16 12:18 PM, H. S. Teoh via Digitalmars-d wrote:
On Fri, Mar 25, 2016 at 11:40:11AM -0400, Steven Schveighoffer 
via Digitalmars-d wrote:

On 3/25/16 11:07 AM, Andrei Alexandrescu wrote:

On 3/25/16 10:07 AM, Steven Schveighoffer wrote:


We should actually be moving *away from* voldemort types:

https://forum.dlang.org/post/n96k3g$ka5$1...@digitalmars.com



Has this bug been submitted? -- Andrei


I'm not sure it's a bug that can be fixed. It's caused by the 
design

of the way template name mangling is included.

I can submit a general "enhancement", but I don't know what 
it would

say?  Make template mangling more efficient? :)

I suppose having a bug report with a demonstration of why we 
should

change it is a good thing. I'll add that.

[...]

We've been talking about compressing template symbols a lot 
recently,
but there's a very simple symbol size reduction that we can do 
right
now: most of the templates in Phobos are eponymous templates, 
and under
the current mangling scheme that means repetition of the 
template name
and the eponymous member in the symbol.  My guess is that most 
of the 4k
symbol bloats come from eponymous templates. In theory, a 
single
character (or something in that vicinity) ought to be enough 
to indicate
an eponymous template. That should cut down symbol size 
significantly
(I'm guessing about 30-40% reduction at a minimum, probably 
more in
practice) without requiring a major overhaul of the mangling 
scheme.


I don't think it's that simple. For example:

auto foo(T)(T t)

Needs to repeat T (whatever it happens to be) twice -- once for 
the template foo, and once for the function parameter. If foo 
returns an internally defined type that can be passed to foo:


x.foo.foo.foo.foo

Each nesting multiplies the size of the symbol by 2 (at least, 
maybe even 3). So it's exponential growth. Even if you compress 
it to one character, having a chain of, say, 16 calls brings 
you to 65k characters for the symbol. We need to remove the 
number of times the symbol is repeated, via some sort of 
substitution.


Added the bug report. Take a look and see what you think.

https://issues.dlang.org/show_bug.cgi?id=15831

-Steve


These repetitions could be eliminated relatively easily (from a 
user's perspective, anyway; things might be more difficult in the 
actual implementation).


Two changes to the mangling:

1) `LName`s of length 0 (which currently cannot exist) mean to 
repeat the previous `LName` of the current symbol.


2) N `Number` is added as a valid `Type`, meaning "Type Back 
Reference". Basically, all instances of a 
struct/class/interface/enum type in a symbol's mangling get 
counted (starting from zero), and subsequent instances of that 
type can be referred to by N0, N1, N2, etc.


So given:

```
module mod;
struct Foo;
Foo* func(Foo* a, Foo* b);
```

`func` currently mangles as:
_D3mod4funcFPS3mod3FooPS3mod3FooZPS3mod3Foo

It would instead be mangled as:
_D3mod4funcFPS3mod3FooPN0ZPN0

Nested templates declarations would get numbered depth first as 
follows:


S7!(S2!(S0, S1), S6!(S3, S5!(S4)))

I have another idea for reducing the byte impact of template 
string value parameters, but it is a bit more complicated and I 
need to finish debugging and optimizing some code to make sure 
it will work as well as I think. I'll post more on that soon, I 
suspect.


Re: Beta D 2.071.0-b1

2016-03-24 Thread Anon via Digitalmars-d-announce

On Friday, 25 March 2016 at 00:55:53 UTC, deadalnix wrote:

On Thursday, 24 March 2016 at 10:52:44 UTC, Martin Nowak wrote:

On 03/24/2016 03:00 AM, deadalnix wrote:
No bug report for it, but a PR: 
https://github.com/deadalnix/pixel-saver/pull/53


That seems unrelated. Bugfixes should simply go into stable 
for them to be released.


Unrelated to what ? It is a type system breaking bug, I think 
it is worth merging.


Unrelated to D? Double check your link. I don't think Martin can 
do anything about that pull request.


Re: Is it safe to use 'is' to compare types?

2016-03-08 Thread Anon via Digitalmars-d-learn

On Tuesday, 8 March 2016 at 20:26:04 UTC, Yuxuan Shui wrote:
On Monday, 7 March 2016 at 16:13:45 UTC, Steven Schveighoffer 
wrote:

On 3/4/16 4:30 PM, Yuxuan Shui wrote:
On Friday, 4 March 2016 at 15:18:55 UTC, Steven Schveighoffer 
wrote:

[...]


Thanks for answering. But I still don't understand why 
TypeInfo would
need to be allocated. Aren't typeid() just returning 
references to the

__DxxTypeInfo___initZ symbol?


You misunderstood, I meant the typeinfo *for* an allocated 
object, not that the typeinfo was allocated.


In some cases, 2 different objects allocated from different 
libraries (usually DLL-land) may reference TypeInfo from 
different segments, even though the TypeInfo is identical.


-Steve


Hmm... Does that mean each DLL will have their own TypeInfo 
symbols for the same type?


[Note: I phrase my answer in terms of Linux shared libraries 
(*.so) because D doesn't actually have proper Windows DLL support 
yet. The same would apply to DLLs, it just feels wrong describing 
functionality that doesn't exist.]


They can, mostly due to templated types. Consider modules 
`common`, `foo`, and `bar` (all built as shared libraries), and 
`main` (built as an executable).


module common; // => common.so
struct List(T)
{
// ...
}

module foo; // => foo.so, links to common.so
import common;

List!int getList()
{
// ...
}

module bar; // => bar.so, links to common.so
import common;

void processList(List!int a)
{
// ...
}

module main; // => main, links to foo.so, bar.so, and common.so
import foo, bar;

void main()
{
processList(getList());
}

No part of List!int is instantiated in common, so no part of it 
is actually present in common.so. Instead, it is instantiated in 
foo and bar, and thus separate copies of List!int are present in 
foo.so and bar.so, along with TypeInfo for List!int.


If you were to statically link instead (using .a or .lib files), 
the linker would keep only one copy of List!int and its TypeInfo, 
but the linker can't eliminate either of them when dealing with 
shared libraries.


So, yes, I think the string comparison is needed, as awkward as 
it may seem in many circumstances.
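
A sketch of that name-based comparison (hypothetical helper, not from the original thread): identity works within one binary image, but duplicated TypeInfo across shared libraries forces the fallback.

```d
// Compare two TypeInfo instances that may come from different
// shared libraries, each carrying its own copy of the TypeInfo.
bool sameType(TypeInfo a, TypeInfo b)
{
    if (a is b)
        return true;                      // same image: cheap identity check
    return a.toString() == b.toString();  // duplicated copies: compare names
}
```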


Re: If stdout is __gshared, why does this throw / crash?

2016-03-05 Thread Anon via Digitalmars-d-learn

On Saturday, 5 March 2016 at 14:18:31 UTC, Atila Neves wrote:
With a small number of threads, things work as intended in the 
code below. But with 1000, on my machine it either crashes or 
throws an exception:



import std.stdio;
import std.parallelism;
import std.range;


void main() {
stdout = File("/dev/null", "w");
foreach(t; 1000.iota.parallel) {
writeln("Oops");
}
}


Note that `1000.iota.parallel` does *not* run 1000 threads. 
`parallel` just splits the work of the range up between the 
worker threads (likely 2, 4, or 8, depending on your CPU). I see 
the effect you describe with any parallel workload. Smaller 
numbers in place of 1000 aren't necessarily splitting things off 
to additional threads, which is why smaller numbers avoid the 
multi-threaded problems you are encountering.
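
A quick way to see this (a sketch; `taskPool.size` reports the worker count, which depends on your CPU):

```d
import std.parallelism : parallel, taskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // parallel divides the 1000 iterations among the pool's worker
    // threads; it never spawns 1000 threads.
    writeln("pool workers: ", taskPool.size);
    foreach (i; iota(1000).parallel)
    {
        // each iteration runs on one of a handful of pooled threads
    }
}
```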


I get, depending on the run, "Bad file descriptor", "Attempting 
to write to a closed file", or segfaults. What am I doing wrong?


Atila


`File` uses ref-counting internally to allow it to auto-close. 
`stdout` and friends are initialized in a special way such that 
they have a high initial ref-count. When you assign a new file to 
stdout, the ref count becomes one. As soon as one of your threads 
exits, this will cause stdout to close, producing the odd errors 
you are encountering on all the other threads.


I would avoid reassigning `stdout` and friends in favor of using 
a logger or manually specifying the file to write to if I were 
you.
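
A minimal sketch of the alternative, writing through an explicit `File` so the global `stdout` ref-count is never disturbed:

```d
import std.stdio : File;

void main()
{
    // an explicit handle instead of reassigning the global stdout
    auto sink = File("/dev/null", "w");
    foreach (i; 0 .. 1000)
        sink.writeln("Oops");
}
```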


Re: Safe cast of arrays

2016-02-10 Thread Anon via Digitalmars-d
On Wednesday, 10 February 2016 at 20:14:29 UTC, Chris Wright 
wrote:


Show a way to read or write outside allocated memory with this, 
or to cause a segmentation fault, and that will require a 
change in @safe. You're looking for something else, data safety 
rather than memory safety. You want to disallow unions and 
anything that lets you emulate them.


Does this count?

http://dpaste.dzfl.pl/96db07a5104e


Re: DIP87: Enhanced Foreign-Language Binding

2016-01-22 Thread Anon via Digitalmars-d

On Friday, 22 January 2016 at 16:37:31 UTC, Jacob Carlborg wrote:

On 2016-01-21 05:21, Anon wrote:
Seeing the recent extern(C++) threads, and much concern 
therein, I'd

like to propose DIP87: http://wiki.dlang.org/DIP87

Destroy to your heart's content.


* How do you plan to differentiate specifying the namespace 
compared with specifying the mangled name for an extern(C++) 
symbol?


Ideally, by whether the `extern()` forms a block or it is 
attached directly to the symbol. I understand that wouldn't work 
with existing implementation in the compiler, but hopefully it 
wouldn't be too difficult to do. But I know nothing of compiler 
internals so am probably wrong.


* For Objective-C bindings one needs to be able to specify the 
mangled name for a class or interface without affecting the 
names of the methods. It's required because in Objective-C it's 
possible to have the same name for a class and a protocol 
(interface). In the Foundation framework (standard library) 
there are a class "Object" and a protocol "Object"


I'll have to look into this more. My cursory reading told me of 
"instance" methods and "class" methods, that get mangled as 
"_i__..." and "_c__..." 
respectively. Is this what you are talking about?


* When creating Objective-C bindings it's basically required to 
specify the selector for every method


I know, because of the parameter names being part of the API (and 
mangled name). I still think the approach described in the DIP 
should be workable.


* In Objective-C the mangled name and the selector is not the 
same. The mangled name is for the linker to find the correct 
symbol. The selector is for finding the correct method 
implementation at runtime


Sure, but when using ObjC code from D:

extern(Objective-C) class Something {
void moveTo(float x, float y) @selector("x:y:");
}

// Elsewhere
something.moveTo(1,2);

I don't see how the selector is doing anything more than mangling 
here. Even if you have multiple methods in D that mangle to the 
method name in ObjC (minus parameters), that selection is done by 
D, which then mangles the name according to the selector, right? 
If not, do you have an example handy to illustrate?


Re: extern(C++, ns)

2016-01-20 Thread Anon via Digitalmars-d

On Thursday, 21 January 2016 at 01:37:12 UTC, Walter Bright wrote:

On 1/20/2016 4:51 PM, Anon wrote:
What would you all say to the following proposal (and should I 
make a DIP?)


DIPs are always welcome.


Done.

http://forum.dlang.org/post/ldtluvnhuznvbebcb...@forum.dlang.org


DIP87: Enhanced Foreign-Language Binding

2016-01-20 Thread Anon via Digitalmars-d
Seeing the recent extern(C++) threads, and much concern therein, 
I'd like to propose DIP87: http://wiki.dlang.org/DIP87


Destroy to your heart's content.


Re: DIP87: Enhanced Foreign-Language Binding

2016-01-20 Thread Anon via Digitalmars-d
On Thursday, 21 January 2016 at 04:42:00 UTC, Rikki Cattermole 
wrote:

On 21/01/16 5:21 PM, Anon wrote:
Seeing the recent extern(C++) threads, and much concern 
therein, I'd

like to propose DIP87: http://wiki.dlang.org/DIP87

Destroy to your heart's content.


It was great until I saw:
extern(auto, "myMoveTo:")

After all:
extern(C/C++/D/Objective-C[, string])

Is that string meant for raw mangling or effect mangling in the 
form of selector?


Just no, leave @selector alone I think.


I don't know ObjC, so I had to wing it on the details there. The 
strings in
extern(Foo, "str") would get sent through Foo's mangler. For 
ObjC, I currently imagine those strings forming the selector, 
much the same way it is specified through @selector. That ObjC's 
mangling mostly consists of 's/:/_/g' is irrelevant. I want *all* 
language binding to happen with a uniform interface. No more 
one-off hacks for a particular language (which is exactly what 
extern(C++,ns) and @selector are).




You have the same problem with c++ namespaces.


I don't see a problem. You'll have to be more specific.



Perhaps this is slightly wrong.
extern(string)
Is the only way to force a specific mangling.


There is no extern(string) in this proposal, nor is there a way 
to force a specific mangling, which (AFAIK) was only introduced 
to allow linking to C symbols that happened to be keywords.




Where as extern(C/C++/D/Objective-C[, string])
with the string altering in C++ and Objective-C mode.


It mangles regardless. Any and all of the extern() modes 
mangle. That C's mangling is to just return the input string is 
irrelevant.



So the only difference is extern(string) vs pragma(mangle, 
string)
Little harder sell, but I think might be worth it for cleaning 
up the language.


The difference is that it can mangle symbols correctly, even if 
the symbol is a D keyword.


Currently:

extern(C++) pragma(mangle, "delegate") int delegate_();

...yields a mangled name of "delegate", and there is no way to 
get the compiler to mangle the symbol correctly. Meaning you have 
to (ab)use pragma(mangle) and provide it with the full mangled 
name yourself. And `version()` it appropriately to deal with 
gcc/clang vs MSVC/DMC mangling.


With this DIP:

extern(C++, "delegate") int delegate_();

... would yield a mangled name of "_Z8delegatev" (or similar).

I thought I did a good enough job of explaining that in the DIP 
so I wouldn't have to here.


Re: extern(C++, ns)

2016-01-20 Thread Anon via Digitalmars-d
What would you all say to the following proposal (and should I 
make a DIP?)



1. deprecate pragma(mangle)
2. deprecate extern(C++, ns)
3. deprecate @selector()
4. Introduce a better, more general extern() as follows:

extern ( <language> [, <string> ] )

Which would *only* influence mangling and calling conventions. 
Blocks would concatenate their <string>s, with the default value 
for a symbol being its identifier. Whatever the concatenated 
string is then gets run through a language-specific mangler with 
knowledge of the type info. It would be an error for nesting 
blocks to change language. The content of the string would depend 
on the language in question. This would be also extendable beyond 
C, C++, D, and Objective-C to other languages if so desired 
(Rust, Go, C#, etc.) while keeping a uniform syntax and behavior 
regardless of the language being bound.


Some examples:

extern(C) int foo(); // Mangled as "foo"

extern(C, "body") int body_(); // "body"

extern(C++) int foo(); // cppMangleOf("foo")

extern(C++, "body") int body_(); // cppMangleOf("body")

extern(D) int foo(); // "_D3fooFZi" -no module

extern(D, "body") int body_(); // "_D4bodyFZi" -no module

extern(C++, "ns::foo") int foo(); // cppMangleOf("ns::foo")

extern(C++, "ns::")
{
int foo(); // cppMangleOf("ns::foo")

extern(C++, "body") int body_(); // cppMangleOf("ns::body")

// I'm unsure of the next two. Both need to be inside an
// extern(<language>) block and would infer the <language>:
// extern("with") int with_(); // cppMangleOf("ns::with")
// extern(auto, "with") int with_(); // cppMangleOf("ns::with")

}

extern(C, "SDL_")
{
void init(); // "SDL_init"
}

extern(D, "std.ascii.")
{
// std.ascii.isAlphaNum.mangleof
bool isAlphaNum(dchar) pure nothrow @nogc @safe;
}


Re: rval->ref const(T), implicit conversions

2016-01-18 Thread Anon via Digitalmars-d

On Monday, 18 January 2016 at 19:32:19 UTC, bitwise wrote:

struct S;

void func(ref S s);
func(S());   // FINE

void func(ref S s) @safe;
func(S());   // ERROR


Isn't that backwards? I mean, @safe functions can't escape their 
parameters, so whether or not it is a temporary shouldn't matter 
to a @safe function. Meanwhile, non-@safe *can* escape 
parameters, and would fail or at least lead to problems if it 
tried to escape a ref to a temporary.


On the other hand, banning @safe code from passing a temporary as 
a ref parameter, while allowing it in non-@safe code makes a bit 
more sense to me, but seems less desirable.
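
The asymmetry being argued can be sketched like this (hypothetical snippet illustrating the post's reasoning, not the DIP itself):

```d
struct S { int x; }

S* leaked;

// Non-@safe code may escape the ref; if a temporary were bound
// here, `leaked` would dangle as soon as the call returned.
void func(ref S s) { leaked = &s; }

// A @safe function cannot take the address of its ref parameter,
// so a temporary's short lifetime could not hurt it.
void safeFunc(ref S s) @safe { s.x = 1; }
```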


Re: DStep 0.2.1

2016-01-17 Thread anon via Digitalmars-d-announce

On Sunday, 17 January 2016 at 11:16:50 UTC, Jacob Carlborg wrote:

On Sunday, 17 January 2016 at 04:05:31 UTC, anon wrote:


[Snip]


libclang.dylib needs to either be in the same directory as 
DStep or in any of standard library search paths. $PATH is not 
searched in for libraries. I think the environment variable 
you're looking for is $DYLD_LIBRARY_PATH. The standard search 
paths for libraries are /usr/lib and /usr/local/lib.


--
/Jacob Carlborg


Thank you. That helped greatly.


Re: DStep 0.2.1

2016-01-16 Thread anon via Digitalmars-d-announce
On Saturday, 16 January 2016 at 19:16:26 UTC, Jacob Carlborg 
wrote:
On 2016-01-16 20:01, Russel Winder via Digitalmars-d-announce 
wrote:


Trying the Debian build on Debian Sid, I still have the 
libclang.so
problem, I have shown the list of things there are below. 
Creating a

hack symbolic link I got it to work.


I've built the DStep against libclang provided by LLVM from 
here [1]. It's easier to test multiple versions of libclang 
that way. But I guess I could build the final binary against 
the system provided libclang.


[1] http://llvm.org/releases/index.html


Help me with this please. Not sure what I'm doing wrong. 
Attempting to compile from code I get:


anon@gwave ~/g/dstep> dub build
Performing "debug" build using dmd for x86_64.
tango 1.0.3+2.068: target for configuration "static" is up to 
date.

mambo 0.0.7: target for configuration "library" is up to date.
dstack 0.0.4: target for configuration "library" is up to date.
dstep 0.1.1+commit.29.g015bd59: building configuration 
"default"...
../../.dub/packages/mambo-0.0.7/mambo/util/Traits.d(154,39): 
Deprecation: typedef is removed
../../.dub/packages/mambo-0.0.7/mambo/util/Traits.d(182,30): 
Deprecation: typedef is removed

Linking...
ld: library not found for -lclang
clang: error: linker command failed with exit code 1 (use -v to 
see invocation)

--- errorlevel 1
dmd failed with exit code 1.

So I download the pre-build binary. And now I get:

dmd failed with exit code 1.
anon@gwave ~/g/dstep> ~/Downloads/dstep
dyld: Library not loaded: @rpath/libclang.dylib
  Referenced from: /Users/anon/Downloads/dstep
  Reason: image not found

I've got clang installed:

anon@gwave ~/g/dstep> clang -v
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.2.0
Thread model: posix

And the path to the lib appended to $PATH environment variable 
(not sure how to pass this to dub):


anon@gwave ~/g/dstep> mdfind -name libclang
/Library/Developer/CommandLineTools/usr/lib/libclang.dylib

anon@gwave ~/g/dstep> echo $PATH
/usr/local/bin /usr/bin /bin /usr/sbin /sbin 
/Library/Developer/CommandLineTools/usr/lib/


Thanks


Re: Voldemort Type Construction Error

2016-01-15 Thread Anon via Digitalmars-d-learn

On Friday, 15 January 2016 at 14:04:50 UTC, Nordlöw wrote:

What have I missed?


In line 126, `static struct Result()` is a template. Either drop 
the parens there, or change the call on line 187 to 
`Result!()(haystack, needles)`.
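
The two fixes in miniature (a hypothetical reduction of the code in question; names are made up):

```d
auto finder(R, N)(R haystack, N needles)
{
    static struct Result   // no parens: an ordinary nested struct
    {
        R haystack;
        N needles;
    }
    return Result(haystack, needles);
    // had it stayed `static struct Result()`, the call would need
    // explicit instantiation: `return Result!()(haystack, needles);`
}

void main()
{
    auto r = finder("hello", "ll");
    assert(r.haystack == "hello");
}
```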


Re: Some feedback on the website.

2015-12-17 Thread Anon via Digitalmars-d

On Thursday, 17 December 2015 at 20:04:44 UTC, jmh530 wrote:

My feedback: add the ability to edit posts in the forum


You can't edit email.


Re: switch with enum

2015-11-25 Thread Anon via Digitalmars-d-learn

On Wednesday, 25 November 2015 at 21:26:09 UTC, Meta wrote:

On Wednesday, 25 November 2015 at 20:47:35 UTC, anonymous wrote:
Use `final switch`. Ordinary `switch`es need an explicit 
default case. `final switch`es have to cover all possibilities 
individually. Implicit default cases are not allowed.



Ordinary `switch`es need an explicit default case


Since when?


Non-final switch without a default case was deprecated in 2011: 
http://dlang.org/changelog/2.054.html
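
A small sketch of the distinction: `final switch` must name every enum member and forbids a `default` case entirely.

```d
enum Color { red, green, blue }

string describe(Color c)
{
    // final switch: every member must be handled, no default allowed;
    // the compiler knows the cases are exhaustive
    final switch (c)
    {
        case Color.red:   return "red";
        case Color.green: return "green";
        case Color.blue:  return "blue";
    }
}

void main()
{
    assert(describe(Color.green) == "green");
}
```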





Re: dchar undefined behaviour

2015-10-23 Thread Anon via Digitalmars-d
On Friday, 23 October 2015 at 21:22:38 UTC, Vladimir Panteleev 
wrote:
That doesn't sound right. In fact, this puts into question why 
dchar.max is at the value it is now. It might be the current 
maximum at the current version of Unicode, but this seems like 
a completely pointless restriction that breaks 
forward-compatibility with future Unicode versions, meaning 
that D programs compiled today may be unable to work with 
Unicode text in the future because of a pointless artificial 
limitation.


Unless UTF-16 is deprecated and completely removed from all 
systems everywhere, there is no way for Unicode Consortium to 
increase the limit beyond U+10FFFF. That limit is not arbitrary, 
but based on the technical limitations of what UTF-16 can 
actually represent. UTF-8 and UTF-32 both have room for 
expansion, but have been defined to match UTF-16's limitations.
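
The arithmetic behind the limit: UTF-16's 1024 high surrogates paired with 1024 low surrogates cover exactly the supplementary planes above the BMP, which is where `dchar.max` comes from.

```d
void main()
{
    // 0x400 high surrogates × 0x400 low surrogates address the
    // planes beyond the BMP's U+0000..U+FFFF
    enum supplementary = 0x400 * 0x400;   // 1,048,576 code points
    static assert(0xFFFF + supplementary == 0x10FFFF);
    static assert(dchar.max == 0x10FFFF); // hence the fixed maximum
}
```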


Re: Pathing in the D ecosystem is generally broken (at least on windows)

2015-09-25 Thread anon via Digitalmars-d

On Saturday, 26 September 2015 at 01:37:57 UTC, Manu wrote:
On 25 September 2015 at 22:17, Kagamin via Digitalmars-d 
 wrote:

[...]


This is because I am constantly introducing new users to D, and 
even

more important when those users are colleagues in my workplace.
If I talk about how cool D is, then point them at the website 
where

they proceed to download and install the compiler, and their
experience is immediately hindered by difficulty to configure 
within

seconds of exposure, this paints a bad first impression, and
frustratingly, it also reflects badly on *me* for recommending 
it;
they're mostly convinced I'm some ridiculous fanboy (they're 
probably

right). This is based exclusively on their experience and
first-impressions. These basic things really matter!

Understand; people with no vested interest in D, and likely some
long-term resistance to every new trend in the software world 
jumping
up and down fighting for their attention (which includes 
fanboys like
me!), will not be impressed unless the experience is efficient 
and

relatively seamless.
I'm talking about appealing to potential end-users, not 
enthusiasts.
My experience is, over and over again, for years now, that 
these tiny
little things **REALLY MATTER**, more than literally anything 
else. If
they're turned away by first impressions, then literally 
nothing else
matters, and you rarely get a second chance; people don't tend 
to

revisit something they've written off in the past.


They just don't care. This is what I think when I read this. If 
it's not the setup it would be something else. They would find 
something else to mask their disinterest. Human beings are 
talented at lying to themselves.


They're just not honest enough with themselves, it's that simple. 
Don't be so gullible and try to understand what's behind the 
excuses!


Re: building Windows kernel-mode drivers in D

2015-09-25 Thread anon via Digitalmars-d

On Friday, 25 September 2015 at 15:17:02 UTC, Cauterite wrote:
The prospect of writing kernel-mode drivers in D doesn't seem 
to get mentioned very often. I know there's been some attempts 
in the past at running D in the kernel, such as XOmB, but no 
mention of doing so on Windows.
I gave it a shot, and it seems to be perfectly feasible, so I 
thought I should share my findings for the benefit of anyone 
else who decides to explore this route.


[...]


Interesting. You could put something in the wiki if you succeed, 
e.g 'how to make a windows driver' with a small description for 
each step.


Re: Beta D 2.068.0-b1

2015-07-14 Thread Anon via Digitalmars-d-announce

On Tuesday, 14 July 2015 at 05:40:52 UTC, Jacob Carlborg wrote:

On 2015-07-14 00:11, Andrew Edwards wrote:

I did notice that I can no longer create folders or links in 
the

/usr/bin || /usr/lib || /usr/share directory or any subs there.
I can however do so in /usr/local/* which is where I've 
installed

the contents of dmd.2.068.0-b1. All seems to work fine.


Hmm, not even using sudo?


Not even with sudo.


Re: Beta D 2.068.0-b1

2015-07-14 Thread Anon via Digitalmars-d-announce

On Tuesday, 14 July 2015 at 05:41:31 UTC, Jacob Carlborg wrote:

On 2015-07-14 00:03, Andrew Edwards wrote:


   The installer encountered an error that caused the
 installation to fail. Contact the software manufacturer
 for assistance.


Do you get some more information, perhaps in the Console?


Nothing else. After this message, I press close and the GUI 
disappears.


Re: bigint compile time errors

2015-07-02 Thread Anon via Digitalmars-d-learn

On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:
The following code fails to compile and responds with the given 
error message. Varying the plusTwo function doesn't work; as 
long as there is an arithmetic operation the error occurs.


This works for me on OSX 10.10 (Yosemite) using DMD64 D Compiler 
v2.067.1.


It seems to mean that there is no way to modify a BigInt at 
compile time. This seriously limits the usability of the type.


enum BigInt test1 = BigInt(123);
enum BigInt test2 = plusTwo(test1);

public static BigInt plusTwo(in bigint n)


Should be plusTwo(in BigInt n) instead.


{
return n + 2;
}

void main()
{
}
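
Putting the suggested fix together, the corrected program would look like this (a sketch; it assumes a compiler version where BigInt works in CTFE, as reported above for DMD 2.067.1):

```d
import std.bigint : BigInt;

// parameter type corrected from `bigint` to `BigInt`
public static BigInt plusTwo(in BigInt n)
{
    return n + 2;
}

enum BigInt test1 = BigInt(123);
enum BigInt test2 = plusTwo(test1); // evaluated at compile time

void main()
{
    assert(test2 == BigInt(125));
}
```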





Re: OctoDeskdex: D language program for your pleasure

2015-06-18 Thread anon via Digitalmars-d

On Tuesday, 16 June 2015 at 00:47:59 UTC, Ramon wrote:

Hi folks

Just want to share some insights about what you can achieve 
with the D programming language and the Sciter library, which I 
use for making the GUI interface.


I've written the OctoDeskdex app, which is basically the 
Github's Octodex page in a desktop version. Source is available 
at: https://github.com/midiway/OctoDeskdex


It uses my https://github.com/midiway/sciter-dport library, 
which lets you use the Sciter technology 
(http://www.terrainformatica.com/sciter/main.whtm) from the D 
language.


If anyone is interested in this kind of GUI library, I can 
write more about how to use it


regards


Great


Re: ldc std.getopt

2015-04-29 Thread Anon via Digitalmars-d-learn

On Wednesday, 29 April 2015 at 19:43:44 UTC, Laeeth Isharc wrote:
I get the following errors under LDC (this is LDC beta, but 
same problem under master) although the code compiles fine 
under DMD.


Am I doing something wrong?


The help generating feature of std.getopt is new in 2.067. Use 
branch merge-2.067 for that. Otherwise, don't use std.getopt's 
help generation just yet.


Re: DIP75 - Release Process

2015-03-12 Thread Anon via Digitalmars-d
On Wednesday, 11 March 2015 at 07:19:57 UTC, Vladimir Panteleev 
wrote:
What is indubitably, actually, very important, and something 
I'm surprised you haven't pushed for since long ago, is making 
it EASY to get more things. Dub absolutely must be a part of D, 
and not today but one or more years ago. There is now a rift in 
this community, between people who use code.dlang.org and its 
packages, and those who do not.


And those of us who don't use dub are *not* going to magically
start using dub just because it is bundled with dmd. I don't use
dub because it doesn't benefit me in any way, and really only
gets in my way.

Coming from a language with a package manager, and then trying 
to build a project with a dozen dependencies by manually 
cloning the repositories and making sure they are the correct 
version, is madness. A package manager encourages people to 
build many small reusable components, because the overhead of 
managing each component becomes very small, and this is 
something we really want.


And any package manager that only operates in source, demands
a central repository (that effectively just redirects to the
actual Git repos), and only works for one language is utterly
worthless for real world projects.

Not to mention, putting extra tools like dustmite and dub in dmd
will only ever benefit dmd users, not those of us who use ldc or
gdc.

Ignoring that for a moment, where does it stop? Do we include an
editor? [sarcasm] Why not? Every D developer needs to edit their
code! Let's go ahead and call Eclipse+DDT the standard D editor,
and bundle that with dmd. [/sarcasm]


Re: DIP75 - Release Process

2015-03-12 Thread Anon via Digitalmars-d

On Thursday, 12 March 2015 at 07:44:01 UTC, Jacob Carlborg wrote:

On 2015-03-11 17:27, Anon wrote:

Ignoring that for a moment, where does it stop? Do we include 
an
editor? [sarcasm] Why not? Every D developer needs to edit 
their
code! Let's go ahead and call Eclipse+DDT the standard D 
editor,

and bundle that with dmd. [/sarcasm]


I don't see why not. Both Microsoft and Apple ship an IDE with 
their SDK's.


That's an IDE that includes a compiler, not a compiler that 
includes an IDE. You aren't downloading cl, you're downloading 
VisualStudio. That you also get cl is an implementation detail. 
If Bruno wanted to release a build of Eclipse+DDT that came with 
a compiler, I'd have no problem with that.


Re: DIP75 - Release Process

2015-03-11 Thread Anon via Digitalmars-d
On Wednesday, 11 March 2015 at 16:35:22 UTC, Andrei Alexandrescu 
wrote:

On 3/11/15 9:27 AM, Anon wrote:
Not to mention, putting extra tools like dustmite and dub in 
dmd
will only ever benefit dmd users, not those of us who use ldc 
or

gdc.


That's entirely reasonable. Each distribution has the freedom 
to bundle whichever tools it finds fit.


My point with that (which I forgot to actually type) was that I
feel there would be better mileage if the D tools were packaged up
and provided apart from the compiler. Then the same tool set
can be used regardless of compiler choice, and (perhaps more
importantly) be updated independently of DFE updates. Some tools
are dependent on the compiler being used, and wouldn't work for
independent distribution, but for the others, it makes more
sense (to me anyway) to make that a separate download.
Of course, installers could be set up to also download that
zip if desired.

By way of example, I'd expect clang-format to be bundled with
clang, but I wouldn't expect (or want) valgrind to be bundled
with clang or gcc. I could however, see the value of a single
download that included valgrind and astyle.

I would agree it would be bad if dustmite and dub were 
locked-in to only work with dmd. Is that the case?


Not to my knowledge, but binary releases for most dmd tools are
only available with dmd, which is not ideal. It also creates
a potential ambiguity, since dmd is not redistributable
without explicit permission from Walter, but most of the tools
included with dmd are. Separating the tools from the compiler
allows a very easy line to be drawn between what is and might
not be redistributable.


Re: Initializing defaults based on type.

2015-03-07 Thread anon via Digitalmars-d-learn

On Friday, 6 March 2015 at 16:04:33 UTC, Benjamin Thaut wrote:

On Friday, 6 March 2015 at 15:36:47 UTC, anon wrote:

Hi,

I can't figure this out.

struct Pair(T)
{
  T x;
  T y;

  alias x c;
  alias y r;
}

What I would like is for x and y to be initialized to 
different values depending on the type, e.g.:


struct Container
{
 Pair!double sample1; // This will initialize sample1 with 0 for both x and y
 Pair!int    sample2; // This will initialize sample2 with 1 for both x and y

}

Currently I'm using two different structs, one with doubles and 
the other with ints, initialized with default values, but I was 
wondering if it's possible to do the above.


anon


struct Pair(T)
{
    static if (is(T == int))
        enum int initValue = 1;
    else
        enum T initValue = 0;

    T x = initValue;
    T y = initValue;

    alias x c;
    alias y r;
}


Thanks
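For reference, the `static if` idiom above can be written as a 
self-contained program (an untested sketch; `main` and the 
printed check are additions for illustration only):

```d
import std.stdio : writeln;

struct Pair(T)
{
    // Choose the default value per instantiated type at compile time.
    static if (is(T == int))
        enum int initValue = 1;
    else
        enum T initValue = 0;

    T x = initValue;
    T y = initValue;

    alias x c;
    alias y r;
}

void main()
{
    Pair!int pi;    // x and y default to 1
    Pair!double pd; // x and y default to 0
    writeln(pi.x, " ", pi.y, " ", pd.c, " ", pd.r);
}
```

Because the branch is resolved at compile time, each instantiation 
carries its own member initializers and no runtime cost is incurred.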


Initializing defaults based on type.

2015-03-06 Thread anon via Digitalmars-d-learn

Hi,

I can't figure this out.

struct Pair(T)
{
   T x;
   T y;

   alias x c;
   alias y r;
}

What I would like is for x and y to be initialized to 
different values depending on the type, e.g.:


struct Container
{
  Pair!double sample1; // This will initialize sample1 with 0 for both x and y
  Pair!int    sample2; // This will initialize sample2 with 1 for both x and y

}

Currently I'm using two different structs, one with doubles and 
the other with ints, initialized with default values, but I was 
wondering if it's possible to do the above.


anon





How can I convert the following C to D.

2015-01-21 Thread anon via Digitalmars-d-learn

I have the following C code, how can I do the same in D.

Info **info;
info = new Info*[hl + 2];

int r;
for(r = 0; r < hl; r++)
{
info[r] = new Info[vl + 2];
}
info[r] = NULL;

anon


Re: How can I convert the following C to D.

2015-01-21 Thread anon via Digitalmars-d-learn
On Wednesday, 21 January 2015 at 23:59:34 UTC, ketmar via 
Digitalmars-d-learn wrote:

On Wed, 21 Jan 2015 23:50:59 +
anon via Digitalmars-d-learn 
digitalmars-d-learn@puremagic.com wrote:


On Wednesday, 21 January 2015 at 23:47:46 UTC, ketmar via 
Digitalmars-d-learn wrote:

 On Wed, 21 Jan 2015 23:44:49 +
 anon via Digitalmars-d-learn 
 digitalmars-d-learn@puremagic.com wrote:


 I have the following C code, how can I do the same in D.
 
 Info **info;

 info = new Info*[hl + 2];
 
 int r;

 for(r = 0; r < hl; r++)
 {
info[r] = new Info[vl + 2];
 }
 info[r] = NULL;
 
 anon

 this is not C.

You're right, it's C++.

so the answer to your question is very easy: just type in any
gibberish. as C cannot compile C++ code, the final result is to 
get the

code that cannot be compiled. any gibberish will do.


Great answer.

Anyway the code isn't mine I just wanted to know how to handle 
what the author wrote.


I got it working with.

auto info = new Info[][](hl, vl);

and changing the logic so as not to check for NULL.

No need to be picky; it was just a question.

anon
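To make the two allocation styles concrete, here is a sketch 
contrasting the rectangular `new Info[][](hl, vl)` form with a 
jagged translation closer to the original C++ loop (the `Info` 
body and the values of `hl`/`vl` are illustrative assumptions; 
untested):

```d
import std.stdio : writeln;

struct Info { int v; }

void main()
{
    enum hl = 3, vl = 4;

    // Rectangular: hl rows of vl columns, allocated in one call.
    auto rect = new Info[][](hl, vl);

    // Jagged: rows allocated one by one, as in the C++ version.
    // The trailing rows stay empty, standing in for the NULL sentinel.
    auto jag = new Info[][](hl + 2);
    foreach (r; 0 .. hl)
        jag[r] = new Info[](vl + 2);

    writeln(rect.length, " ", rect[0].length); // 3 4
    writeln(jag.length, " ", jag[hl].length);  // 5 0
}
```

The rectangular form is simpler when every row has the same 
length; the jagged form only pays off when row lengths differ or 
rows are allocated lazily.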


Re: How can I convert the following C to D.

2015-01-21 Thread anon via Digitalmars-d-learn

On Thursday, 22 January 2015 at 00:16:23 UTC, bearophile wrote:

anon:


I have the following C code, how can I do the same in D.

Info **info;
info = new Info*[hl + 2];

int r;
for(r = 0; r < hl; r++)
{
info[r] = new Info[vl + 2];
}
info[r] = NULL;


I suggest you ignore ketmar, he's not helping :-)

Is your code initializing info[r+1]?

This is roughly a D translation (untested):


void main() @safe {
import std.stdio;

enum uint hl = 5;
enum uint vl = 7;
static struct Info {}

auto info = new Info[][](hl + 2);

foreach (ref r; info[0 .. hl])
r = new Info[vl + 2];

writefln("[\n%(%s,\n%)\n]", info);
}


Output:

[
[Info(), Info(), Info(), Info(), Info(), Info(), Info(), 
Info(), Info()],
[Info(), Info(), Info(), Info(), Info(), Info(), Info(), 
Info(), Info()],
[Info(), Info(), Info(), Info(), Info(), Info(), Info(), 
Info(), Info()],
[Info(), Info(), Info(), Info(), Info(), Info(), Info(), 
Info(), Info()],
[Info(), Info(), Info(), Info(), Info(), Info(), Info(), 
Info(), Info()],

[],
[]
]


Is this what you're looking for?

Bye,
bearophile


Hi Bearophile,

It looks like what I need.

Thanks,
anon


Re: How can I convert the following C to D.

2015-01-21 Thread anon via Digitalmars-d-learn
On Wednesday, 21 January 2015 at 23:47:46 UTC, ketmar via 
Digitalmars-d-learn wrote:

On Wed, 21 Jan 2015 23:44:49 +
anon via Digitalmars-d-learn 
digitalmars-d-learn@puremagic.com wrote:



I have the following C code, how can I do the same in D.

Info **info;
info = new Info*[hl + 2];

int r;
for(r = 0; r < hl; r++)
{
info[r] = new Info[vl + 2];
}
info[r] = NULL;

anon

this is not C.


You're right, it's C++.


Re: Worst Phobos documentation evar!

2014-12-31 Thread Anon via Digitalmars-d
On Wednesday, 31 December 2014 at 19:11:27 UTC, Walter Bright 
wrote:

On 12/31/2014 7:20 AM, Vladimir Panteleev wrote:
On Monday, 29 December 2014 at 22:39:02 UTC, Walter Bright 
wrote:

* reddit
* github


These both use Markdown. The syntax is the same, except for 
minor things, such

as the handling of newlines.


Yes, the same only different.


Just like DDoc macros and Makefile macros. They're the same, but 
different.
Also, the differences between Markdown implementations are 
trivial, and do not affect the readability of the source, which 
is the entire point of Markdown - making the plain text 
readable, rather than polluting it with HTML (or DDoc) tag noise.



* wiki
* hackernews


Hacker News and both the new D Wiki, and the old, do not use 
Markdown.


It's just another variation of it - which is my point.


And your point is completely wrong. DDoc and Makefiles both use 
$(MACROS), does that mean that DDoc is a variation of Make?


Yes, *lots* of things use common elements. Because that makes 
things more easily understood when *reading*, which is the single 
most important thing for documentation. The macros are fine for 
when they are needed, but you shouldn't have to use gotos and 
jumps when all you want is a gorram foreach loop. Nor should you 
have to write (or read!) $(UL $(LI A) $(LI B) $(LI C)) to get a 
list.



I know that Markdown formatting is context sensitive.
And what happens if you want to have a * at the beginning of 
the line of

output?
And a | in a table entry? And so on for each of the context 
sensitive things?


A backslash. Y'know, the unambiguous, 
familiar-to-all-programmers, really-hard-to-mistype thing that 
almost everything but HTML and DDoc uses for escaping?


Re: D language manipulation of dataframe type structures

2014-12-26 Thread anon via Digitalmars-d-learn

On Wednesday, 25 September 2013 at 04:35:57 UTC, lomereiter wrote:
I thought about it once but quickly abandoned the idea. The 
primary reason was that D doesn't have a REPL and is thus not 
suitable for interactive data exploration.



https://github.com/MartinNowak/drepl
https://drepl.dawg.eu/


Re: Loops versus ranges

2014-12-19 Thread anon via Digitalmars-d-learn

On Friday, 19 December 2014 at 10:41:04 UTC, bearophile wrote:
A case where the usage of ranges (UFCS chains) leads to very 
bad performance:



import std.stdio: writeln;
import std.algorithm: map, join;

uint count1, count2;

const(int)[] foo1(in int[] data, in int i, in int max) {
count1++;

if (i < max) {
typeof(return) result;
foreach (immutable n; data)
result ~= foo1(data, i + 1, max);
return result;
} else {
return data;
}
}

const(int)[] foo2(in int[] data, in int i, in int max) {
count2++;

if (i < max) {
return data.map!(n => foo2(data, i + 1, max)).join;
} else {
return data;
}
}

void main() {
const r1 = foo1([1, 2, 3, 4, 5], 1, 7);
writeln(count1); // 19531
const r2 = foo2([1, 2, 3, 4, 5], 1, 7);
writeln(count2); // 111
assert(r1 == r2); // Results are equally correct.
}


Can you tell why? :-)

Bye,
bearophile


Changed to
return data.map!(n => foo2(data, i + 1, 
max)).cache.joiner.array;
then it produced the same result as the array version. 
`map.cache.join` resulted in 597871.
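The underlying issue is that a lazy `map` re-runs its callback 
every time the range is traversed, so nested lazy chains multiply 
the work, while `cache` evaluates each element only once per pass. 
A minimal sketch of the effect (the counting lambda is an 
illustrative assumption, not code from the thread; untested):

```d
import std.algorithm : map, joiner;
import std.array : array;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    int calls;
    auto lazyRange = iota(3).map!((n) { ++calls; return [n, n]; });

    // Each full traversal of the lazy range re-invokes the lambda.
    auto first  = lazyRange.joiner.array;
    auto second = lazyRange.joiner.array;
    writeln(calls); // one call per element per traversal
}
```

In the recursive `foo2` above, every level of `join` re-traverses 
the mapped range produced by the level below it, which is why the 
call count explodes compared to the eager version.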


Comparing Parallelization in HPC with D, Chapel, and Go

2014-11-21 Thread anon via Digitalmars-d


https://www.academia.edu/3982638/A_Study_of_Successive_Over-relaxation_SOR_Method_Parallelization_Over_Modern_HPC_Languages


Re: Thank you Kenji

2014-05-23 Thread Anon via Digitalmars-d

On Friday, 23 May 2014 at 06:57:06 UTC, Ali Çehreli wrote:
There is word out there that Kenji Hara and bearophile are the 
same person. (I think it is the same AI running on a powerful 
server farm. :p)


Ali


That explains why he couldn't come to DConf.


Question for Andrei

2014-05-21 Thread Anon via Digitalmars-d

I meant to ask this question during Andrei's talk at DConf, but
forgot. Dmitry Olshansky's talk about regex made me remember,
since regex is one of the major stumbling areas when using CTFE
unless you're careful.

I heard once that you were strongly against using manifest over
enum for manifest constants. Is your stance still the same, with
all the problems that come with using enum with ctRegex, arrays,
etc.?