Re: C++/D interface: exceptions

2014-09-11 Thread Jacob Carlborg via Digitalmars-d

On 12/09/14 02:44, David Nadlinger wrote:


Just a quick comment on this: 2) is very simple to implement for all the
compilers that actually use libunwind Dwarf 2 EH on Linux, i.e. GDC and
LDC (I think deadalnix already looked into this for SDC).


It would be nice if DMD could do the same for D exceptions.


3) is also doable, but of course significantly more annoying because you
need to deal with the internals of the exception ABI of your C++
compiler. Accessing the exception object is relatively trivial, ABI and
mangling support is slowly coming anyway, but OTOH handling the
exception lifetime correctly could become somewhat of a headache.


Is handling the exception any more of a headache compared to only using C++?

--
/Jacob Carlborg


Re: C++/D interface: exceptions

2014-09-11 Thread Jacob Carlborg via Digitalmars-d

On 12/09/14 02:35, Andrei Alexandrescu wrote:

Hello,


We are racking our brains to figure out what to do about exceptions
thrown from C++ functions into D code that calls them.

A few levels of Nirvana would go like this:

0. Undefined behavior - the only advantage to this is we're there
already with no work :o).

1. C++ exceptions may be caught only by C++ code on the call stack; D
code does the stack unwinding appropriately (dtors, scope statements)
but can't catch stuff.

2. D code can catch exceptions from C++ (e.g. via a CppException wrapper
class) and give some info on them, e.g. the what() string if any.

Making any progress on this is likely to be hard work, so any idea that
structures and simplifies the design space would be welcome.


1 and 2 would really help the D/Objective-C integration as well, since on 
64-bit it uses the same exception model as C++.


--
/Jacob Carlborg


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Jacob Carlborg via Digitalmars-d

On 11/09/14 21:02, eles wrote:


Could you provide one or two short but illustrative examples showing how 
it's done in Tango, and why it can't be done in Phobos?


Tango:

import tango.text.Unicode;

void foo ()
{
    char[3] result; // pre-allocate a buffer on the stack
    auto b = "foo".toUpper(result);
}

Phobos:

import std.uni;

void foo ()
{
    auto b = "foo".toUpper(); // no way to use a pre-allocated buffer
}


Will Andrei's allocators improve that with some rewrite of Phobos?


Yes, they could.
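
In the meantime, buffer reuse is at least possible for plain ASCII, since 
std.ascii.toUpper works per character and never allocates. A minimal sketch:

import std.ascii : toUpper;

void foo ()
{
    enum input = "foo";
    char[input.length] result;              // pre-allocated buffer on the stack
    foreach (i, c; input)
        result[i] = cast(char) toUpper(c);  // per-character, no GC allocation
    // result[] now holds "FOO"; a buffer- or allocator-aware std.uni.toUpper
    // could offer the same for full Unicode.
}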

--
/Jacob Carlborg


Re: alias two froms

2014-09-11 Thread eles via Digitalmars-d

On Thursday, 11 September 2014 at 23:05:19 UTC, Ali Çehreli wrote:

On 09/11/2014 12:08 PM, eles wrote:


The 'alias this = A' syntax did appear for one release as an 
unintentional feature:


Having two syntaxes for alias makes everything look worse and adds 
unnecessary noise.


I would rather go with something like:

alias alpha~=beta;

if I needed multiple aliases (think of alpha as a list of aliased types).






Common scope for function's in{}, out(result){} and body{}

2014-09-11 Thread Ilya Yaroshenko via Digitalmars-d

Hello!

Local imports are available in D, but when contract programming is used 
there is no common function-level scope for them.


So I suppose D needs a common scope:

---
auto functionName(Range1, Range2)(Range1 r1, Range2 r2)
scope {
    import std.range;                    // imports are allowed
    enum MaxLength = 1024;               // enums are allowed
    immutable A = [0.123, 0.456, 0.789]; // immutable and const are allowed

    bool isOdd(uint i) { return i % 2 != 0; }

    static assert(isInputRange!Range1,
        Range1.stringof ~ " is not an InputRange");

    int i;                        // Error: mutable data disallowed
    immutable length = r1.length; // Error: can't use function parameters
}
in {
    assert(!r1.empty); // we already know that r1 is an InputRange,
                       // and the function has local imports!

    // can use std.range, MaxLength, A, isOdd
}
out(result) {
    // can use std.range, MaxLength, A, isOdd
}
body {
    // can use std.range, MaxLength, A, isOdd
}
---
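
For comparison, a rough approximation is already possible today by hoisting 
the shared declarations into a mixin template and mixing it into each block. 
It is repetitive, and the snippet below is only an untested sketch (Common 
is just an illustrative name), but it shows the idea:

---
mixin template Common()
{
    import std.range;
    enum MaxLength = 1024;
    bool isOdd(uint i) { return i % 2 != 0; }
}

int functionName(Range1, Range2)(Range1 r1, Range2 r2)
in {
    mixin Common;
    static assert(isInputRange!Range1,
        Range1.stringof ~ " is not an InputRange");
    assert(!r1.empty);
}
body {
    mixin Common;
    return MaxLength; // placeholder result for the sketch
}
---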


What do you think?

Best Regards,
Ilya


Re: How to build dmd with debugging symbols in Linux/x86_64?

2014-09-11 Thread Jacob Carlborg via Digitalmars-d

On 12/09/14 00:22, H. S. Teoh via Digitalmars-d wrote:


Any ideas how to work around this breakage? (Short of hacking
root/checkedint.c, which is not particularly appealing since it's
supposed to be a direct translation of the D version of checkedint.)


If you only need debugging symbols, then don't pass DEBUG=1. Just add 
"-g -g3" to the compiler command line in the makefile.


--
/Jacob Carlborg


Re: C++/D interface: exceptions

2014-09-11 Thread Iain Buclaw via Digitalmars-d
On 12 Sep 2014 01:35, "Andrei Alexandrescu via Digitalmars-d" <
digitalmars-d@puremagic.com> wrote:
>
> Hello,
>
>
> We are racking our brains to figure out what to do about exceptions
thrown from C++ functions into D code that calls them.
>
> A few levels of Nirvana would go like this:
>
> 0. Undefined behavior - the only advantage to this is we're there already
with no work :o).
>
> 1. C++ exceptions may be caught only by C++ code on the call stack; D
code does the stack unwinding appropriately (dtors, scope statements) but
can't catch stuff.
>

With libunwind plus handling of foreign exceptions you can do this. It's 
also beneficial in that it works with any other language that uses 
libunwind, such as gccgo.

Iain


Re: C++ interop - what to do about long and unsigned long?

2014-09-11 Thread Manu via Digitalmars-d
So, can we talk about virtual by default again?
Daniel Murphy was behind it WRT C++ compatibility.
It's still driving me insane. All the things I said would happen do happen,
constantly.
On 12 Sep 2014 09:25, "Walter Bright via Digitalmars-d" <
digitalmars-d@puremagic.com> wrote:

> On 9/11/2014 8:39 AM, Sean Kelly wrote:
>
>> Is C++ interop really that important or is it another one of those "if D
>> had
>> this, *then* I would use it!" dismissals.  C interop is clearly crucial.
>> Operating system interfaces are written in C, and not being able to call C
>> functions is hugely limiting.  But C++?  I honestly can't envision a
>> situation
>> where I would actually care about C++ interop.  Is this truly a blocker
>> for some
>> people?  Like an actual, honest blocker and not just a false flag?
>>
>
> C++ was adopted because one could gradually ease into it from C. This will
> never be true for C++ => D, but many people have reported it was nearly
> impossible to transition to D for them because they had engines, libraries,
> whatever, in C++ and it was just not reasonable to wrap them with a C
> interface. So they just stayed with C++.
>
> Considering that some of them spent some significant effort trying to do
> it suggests it is an honest blocker (and I've seen plenty of false flags).
>
> Interestingly, D's "competitor" languages do not offer any migration path
> from C++, and some are even poor at hooking up with C code. Having a better
> story with D offers us potentially a huge advantage.
>


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread deadalnix via Digitalmars-d
On Thursday, 11 September 2014 at 20:55:43 UTC, Andrey Lifanov 
wrote:
Everyone talks about the greatness and safety of GC, and how hard it is 
to live without it... But, I suppose, you all know the one programming 
language in which 95% of AAA-quality popular desktop software and OSes 
is written. And that language is C/C++.


How do you explain this? Is it just because we are stubborn and silly 
people that we use terrible old C++? No. The real answer: there is no 
alternative.


Stop telling fairy tales that it is not possible to program safely in 
C++. Every experienced programmer can easily handle parallel programming 
and memory management in C++. Yes, it requires certain work and 
knowledge, but it is possible, and many of us do it on an everyday basis 
(at my current job we use reference counting, though the overall quality 
of the code is terrible, I must admit).


You mean safe like OpenSSL, GnuTLS, or Apple's?


@nogc and exceptions

2014-09-11 Thread Jakob Ovrum via Digitalmars-d
There is one massive blocker for `@nogc` adoption in D library 
code: allocation of exception objects. The GC heap is an ideal 
location for exception objects, but `@nogc` has to stick to its 
promise, so an alternative method of memory management is 
desirable if we want the standard library to be widely usable in 
`@nogc` user code, as well as enabling third-party libraries to 
apply `@nogc`. If we don't solve this, we'll stratify D code into 
two separate camps, the GC-using camp and the `@nogc`-using camp, 
each with their own set of library code.


I can think of a couple of ways to go:

1) The most widely discussed path is to allocate exception 
instances statically, either in global memory or TLS. Currently, 
this has a few serious problems:


  1a) If the exception is chained, that is, if the same exception 
appears twice in the same exception chain - which can easily 
happen when an exception is thrown from a `scope(exit|failure)` 
statement or from a destructor - the chaining mechanism will 
construct a self-referencing list that results in an infinite 
loop when the chain is walked, such as by the global exception 
handler that prints the chain to stderr. This can easily be 
demonstrated with the below snippet:


---
void main()
{
    static immutable ex = new Exception("");
    scope(exit) throw ex; // throws ex a second time during unwinding,
                          // chaining the exception onto itself
    throw ex;
}
---

Amending the chaining mechanism to simply *disallow* these chains 
would neuter exception chaining severely, in fact making it more 
or less useless: it's not realistically possible to predict which 
exceptions will appear twice when calling code from multiple 
libraries.


  1b) Exceptions constructed at compile-time which are then later 
referenced at runtime (as in the above snippet) must be immutable 
(the compiler enforces this), as this feature only supports 
allocation in global memory, not in TLS. This brings us to an 
unsolved bug in the exception mechanism - the ability to get a 
mutable reference to an immutable exception without using a cast:


---
void main()
{
    static immutable ex = new Exception("");
    try throw ex;
    catch(Exception e) // `e` is a mutable reference
    {
        // The exception is caught and `e` aliases `ex`
    }
}
---

Fixing this would likely involve requiring 
`catch(const(Exception) e)` at the catch-site, which would 
require users to update all their exception-handling code, and if 
they don't, the program will happily compile but the catch-site 
no longer matches. This is especially egregious as error-handling 
code is often the least tested part of the program. Essentially 
D's entire exception mechanism is not const-correct.


  1c) Enhancing the compiler to allow statically constructing in 
TLS, or allocating space in TLS first then constructing the 
exception lazily at runtime, would allow us to keep throwing 
mutable exceptions, but would seriously bloat the TLS section. We 
can of course allocate shared instances in global memory and 
throw those, but this requires thread-safe code at the catch-site 
which has similar problems to catching by const.


2) The above really shows how beneficial dynamic memory 
allocation is for exceptions. A possibility would be to allocate 
exceptions on a non-GC heap, like the C heap (malloc) or a 
thread-local heap. Of course, without further amendments the onus 
is then on the catch-site to explicitly manage memory, which 
would silently break virtually all exception-handling code really 
badly.


However, if we assume that most catch-sites *don't* escape 
references to exceptions from the caught chain, we could 
gracefully work around this with minimal and benevolent breakage: 
amend the compiler to implicitly insert a cleanup call at the end 
of each catch-block. The cleanup function would destroy and free 
the whole chain, but only if a flag indicates that the exception 
was allocated with this standard heap mechanism. Chains of 
exceptions with mixed allocation origin would have to be dealt 
with in some manner. If, inside the catch-block, the chain is rethrown or 
sent in flight by a further exception, the cleanup call would simply not 
be reached, and cleanup would be deferred to the next catch-site, and so on.
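
Conceptually, the implicitly inserted cleanup might look like this (a 
sketch only; _d_freeThrownChain is an invented name for a hypothetical 
druntime hook, not an existing function):

---
void mayThrow() { throw new Exception("boom"); }

void example()
{
    try
    {
        mayThrow();
    }
    catch (Exception e)
    {
        // ... user code handling e ...

        // inserted implicitly by the compiler at the end of the catch block:
        // _d_freeThrownChain(e); // destroys and frees e and its chain, but
        //                        // only if the "allocated by the standard
        //                        // heap mechanism" flag is set
    }
}
---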


Escaping references to caught exceptions would be undefined 
behaviour. To statically enforce this doesn't happen, exception 
references declared in catch-blocks could be made implicitly 
`scope`. This depends on `scope` actually working reasonably 
well. This would be the only breaking change for user code, and 
the fix is simply making a copy of the escaped exception.


Anyway, I'm wondering what thoughts you guys have on this nascent 
but vitally important issue. What do we do about this?


Re: C++/D interface: exceptions

2014-09-11 Thread Sean Kelly via Digitalmars-d
On Friday, 12 September 2014 at 00:34:52 UTC, Andrei Alexandrescu 
wrote:


We are racking our brains to figure out what to do about 
exceptions thrown from C++ functions into D code that calls 
them.


Allowing a C++ exception to propagate through D code is 
definitely possible.  Catching a C++ exception would require some 
knowledge of how that particular C++ runtime throws exceptions 
though.  I could see registering an exception handler for a C++ exception, 
perhaps somewhat similar to SEH, and then prettying it up with syntactic 
sugar.  It seems tricky but doable, at least at a glance.


Re: C++/D interface: exceptions

2014-09-11 Thread deadalnix via Digitalmars-d
On Friday, 12 September 2014 at 00:44:10 UTC, David Nadlinger 
wrote:
3) is also doable, but of course significantly more annoying 
because you need to deal with the internals of the exception 
ABI of your C++ compiler. Accessing the exception object is 
relatively trivial, ABI and mangling support is slowly coming 
anyway, but OTOH handling the exception lifetime correctly 
could become somewhat of a headache.




Yes, that is pretty much why I limited myself to the "unwind properly 
but do not catch" option. The other one would require messing with the 
innards of various C++ runtimes.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Sean Kelly via Digitalmars-d
On Thursday, 11 September 2014 at 20:55:43 UTC, Andrey Lifanov 
wrote:
Every experienced programmer can easily handle parallel 
programming and memory management in C++.


Eliminate "parallel programming" from that statement and I could 
be convinced to believe you, though after years of diagnosing 
bugs that almost invariably tied back to dangling pointer issues, 
even that would be a hard sell.  But even for programmers who 
really have this stuff down... how much of your code and your 
mental energy with C++ is spent on memory ownership rules?  Is it 
really a productive use of your time?  Does the program 
materially benefit from the design required to make it safe, 
correct, and self-documenting with respect to memory ownership 
and data lifetime?  Are smart pointers really that pleasant to 
work with?


Re: C++/D interface: exceptions

2014-09-11 Thread deadalnix via Digitalmars-d
On Friday, 12 September 2014 at 00:34:52 UTC, Andrei Alexandrescu 
wrote:

Hello,


We are racking our brains to figure out what to do about 
exceptions thrown from C++ functions into D code that calls 
them.


A few levels of Nirvana would go like this:

0. Undefined behavior - the only advantage to this is we're 
there already with no work :o).


1. C++ exceptions may be caught only by C++ code on the call 
stack; D code does the stack unwinding appropriately (dtors, 
scope statements) but can't catch stuff.




This is what SDC does.

2. D code can catch exceptions from C++ (e.g. via a 
CppException wrapper class) and give some info on them, e.g. 
the what() string if any.




This would require druntime to depend on the C++ runtime.


Re: Better Debugging With Metaprogramming

2014-09-11 Thread Kyle Siefring via Digitalmars-d

On Friday, 12 September 2014 at 01:19:40 UTC, Kyle Siefring wrote:
On Friday, 12 September 2014 at 01:17:10 UTC, Kyle Siefring 
wrote:
I was thinking that there were a few useful applications where 
metaprogramming could be used, such as:


int rng()
{

}


I think pressing the tab key just sent this message before I was sure I 
even wanted to type it. Let's see if the tab key is the culprit... Okay, 
the tab key selected the send element and then I must have pressed enter. 
Let's try that...


Not really metaprogramming, I suppose. I think I know how I made myself 
think that, but explaining it is more than I can write.


I'm sure somebody is going to be curious about what I was going to write, 
so I suppose I have to write it anyway. Will what I write be coherent? 
Probably not.


I want this. With sensible syntax.

int rng (int n)
dbg { // define what can be put into the value of the key-value
      // store of the debug database
    int notSoRandomNumber;
}
body {
    dbg // checks if the database is empty, then if there is an
        // entry for the callee of this method
    {
        // return a number determined outside of compile time and
        // provided by the user
        return dbg.notSoRandomNumber;
    }
    else
    {
        // do things normally
        return (a random number 0 through n);
    }
}

Callees automatically pass on who they are. You can select callees using 
some method (IDE, gdb-like breakpoint list, etc.) and then add entries. 
You would be doing a search for where line and file equal what you select.


usage of the above function

string[] choices = ["dead", "stone dead", "definitely deceased",
    "bleeding demised", "not pining", "passed on", "no more",
    "ceased to be", "expired and gone to meet his maker", "a stiff",
    "Bereft of life", "rests in peace", "etc.", "EX-PARROT"];

writeln(choices[rng(choices.length)]);

You could write this function and not have to change the code 
that calls it!


Another fun idea. dbg variables can be searched in the database.

for(dbg int i = 0; i < 5; i++)
{
  string[] choices = ["dead", "stone dead", "definitely deceased",
      "bleeding demised", "not pining", "passed on", "no more",
      "ceased to be", "expired and gone to meet his maker", "a stiff",
      "Bereft of life", "rests in peace", "etc.", "EX-PARROT"];

  writeln(choices[rng(choices.length)]);
}

You could decide to only interfere when i == 3, for example.

This idea may be stupid, but that cursed tab-and-enter thing forced me to 
write this regardless of whether it's any good.


Better Debugging With Metaprogramming

2014-09-11 Thread Kyle Siefring via Digitalmars-d
I was thinking that there were a few useful applications where 
metaprogramming could be used, such as:


int rng()
{

}


Re: Better Debugging With Metaprogramming

2014-09-11 Thread Kyle Siefring via Digitalmars-d

On Friday, 12 September 2014 at 01:17:10 UTC, Kyle Siefring wrote:
I was thinking that there were a few useful applications where 
metaprogramming could be used, such as:


int rng()
{

}


I think pressing the tab key just sent this message before I was sure I 
even wanted to type it. Let's see if the tab key is the culprit... Okay, 
the tab key selected the send element and then I must have pressed enter. 
Let's try that...


Re: C++/D interface: exceptions

2014-09-11 Thread David Nadlinger via Digitalmars-d
On Friday, 12 September 2014 at 00:44:10 UTC, David Nadlinger 
wrote:

2) […] 3)


Aaand of course I missed the fact that your list actually started at 0.


Re: C++/D interface: exceptions

2014-09-11 Thread David Nadlinger via Digitalmars-d
On Friday, 12 September 2014 at 00:34:52 UTC, Andrei Alexandrescu 
wrote:
Making any progress on this is likely to be hard work, so any 
idea that structures and simplifies the design space would be 
welcome.


Just a quick comment on this: 2) is very simple to implement for 
all the compilers that actually use libunwind Dwarf 2 EH on 
Linux, i.e. GDC and LDC (I think deadalnix already looked into 
this for SDC).


3) is also doable, but of course significantly more annoying 
because you need to deal with the internals of the exception ABI 
of your C++ compiler. Accessing the exception object is 
relatively trivial, ABI and mangling support is slowly coming 
anyway, but OTOH handling the exception lifetime correctly could 
become somewhat of a headache.


David


C++/D interface: exceptions

2014-09-11 Thread Andrei Alexandrescu via Digitalmars-d

Hello,


We are racking our brains to figure out what to do about exceptions 
thrown from C++ functions into D code that calls them.


A few levels of Nirvana would go like this:

0. Undefined behavior - the only advantage to this is we're there 
already with no work :o).


1. C++ exceptions may be caught only by C++ code on the call stack; D 
code does the stack unwinding appropriately (dtors, scope statements) 
but can't catch stuff.


2. D code can catch exceptions from C++ (e.g. via a CppException wrapper 
class) and give some info on them, e.g. the what() string if any.
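
To illustrate what 2 might look like from the caller's side (everything 
below is hypothetical; no such wrapper exists in druntime today):

// CppException here is a hypothetical stand-in, declared only for illustration.
class CppException : Exception
{
    this(string msg) { super(msg); }
    string what() { return msg; }        // would expose std::exception::what()
}

extern(C++) void someCppFunction();      // assumed to throw on the C++ side

void caller()
{
    try
        someCppFunction();
    catch (CppException e)
    {
        auto info = e.what();            // inspect the what() string, if any
    }
}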


Making any progress on this is likely to be hard work, so any idea that 
structures and simplifies the design space would be welcome.



Andrei


Re: alias two froms

2014-09-11 Thread Brian Schott via Digitalmars-d

On Thursday, 11 September 2014 at 22:54:49 UTC, Mike wrote:
I saw your code to automate a fix in support of DIP65, but I 
haven't seen any code to automate updating the alias syntax.  
If you have this on hand, would you be willing to share it?


Mike


dfix will remain unwritten until Walter is willing to make the 
decisions that would require it. I have several other things to 
work on in my spare time, and those other things will actually be 
used.


In the meantime you can use D-Scanner to find instances of old 
alias declarations (and other things) in your code.


Re: C++ interop - what to do about long and unsigned long?

2014-09-11 Thread Walter Bright via Digitalmars-d

On 9/11/2014 8:39 AM, Sean Kelly wrote:

Is C++ interop really that important or is it another one of those "if D had
this, *then* I would use it!" dismissals.  C interop is clearly crucial.
Operating system interfaces are written in C, and not being able to call C
functions is hugely limiting.  But C++?  I honestly can't envision a situation
where I would actually care about C++ interop.  Is this truly a blocker for some
people?  Like an actual, honest blocker and not just a false flag?


C++ was adopted because one could gradually ease into it from C. This will never 
be true for C++ => D, but many people have reported it was nearly impossible to 
transition to D for them because they had engines, libraries, whatever, in C++ 
and it was just not reasonable to wrap them with a C interface. So they just 
stayed with C++.


Considering that some of them spent some significant effort trying to do it 
suggests it is an honest blocker (and I've seen plenty of false flags).


Interestingly, D's "competitor" languages do not offer any migration path from 
C++, and some are even poor at hooking up with C code. Having a better story 
with D offers us potentially a huge advantage.


Re: alias two froms

2014-09-11 Thread Mike via Digitalmars-d
On Thursday, 11 September 2014 at 20:06:17 UTC, Peter Alexander 
wrote:


The problem is that the new syntax is only a few versions old, 
so deprecating the old syntax means breaking all D code that's 
more than a year old.


If D had no existing customers then yeah, we'd remove it, but I 
think it's too early to start deprecation.


If the desire is to deprecate something in the language, it is 
best to start early to prevent the problem from getting worse.


I believe this could be done gradually with little or no disruption.


1. Create a dfix utility to automate updating the alias syntax,
2. Update documentation to fix all examples using the old syntax
3. Update documentation stating that, while both syntaxes are allowed, the 
new one is preferred
4. Using the dfix utility, update phobos, druntime, and 
potentially other projects in the D ecosystem

5. Give users a few versions to let it sink in
6. Mark old syntax as deprecated so users get a warning
7. Give users a few versions to adjust
8. Finally deprecate old syntax

This recipe could potentially be used to fix many annoyances in the 
language besides the one that started this thread.  But it requires 
someone (like me) to first create the dfix utility.  The existence of 
such a tool could potentially change D's culture as well.  Automating the 
upgrade of the alias syntax would set an important precedent.


Mike


Re: alias two froms

2014-09-11 Thread Ali Çehreli via Digitalmars-d

On 09/11/2014 12:08 PM, eles wrote:

See this:

http://forum.dlang.org/post/kfdkkwikrfvaukhct...@forum.dlang.org

"alias" supports two syntaxes, one of them specifically to address
writing things like alias A this.

That's inconsistent. I agree not the most urgent thing in the world, but
while the fixing things happens (see the @property), why not address
this too?

So? Deprecate the old syntax?


The 'alias this = A' syntax did appear for one release as an 
unintentional feature:



http://forum.dlang.org/thread/aaflopktcjmljxdno...@forum.dlang.org#post-aaflopktcjmljxdnoizj:40forum.dlang.org

Ali



Re: alias two froms

2014-09-11 Thread Mike via Digitalmars-d
On Thursday, 11 September 2014 at 20:14:47 UTC, Brian Schott 
wrote:

On Thursday, 11 September 2014 at 19:08:27 UTC, eles wrote:

So? Deprecate the old syntax?


I tried to get people to agree to deprecate an old syntax that 
was worse. I also wrote an automated upgrade tool that would 
fix people's code.


I couldn't get it approved. As far as I can tell, the official 
position is that parser bugs and language inconsistencies are 
permanent.


I share your frustration.

I saw your code to automate a fix in support of DIP65, but I 
haven't seen any code to automate updating the alias syntax.  If you have 
this on hand, would you be willing to share it?


Mike


Re: alias two froms

2014-09-11 Thread eles via Digitalmars-d
On Thursday, 11 September 2014 at 20:06:17 UTC, Peter Alexander 
wrote:

On Thursday, 11 September 2014 at 19:08:27 UTC, eles wrote:

See this:



so deprecating the old syntax means breaking all D code that's


Breaking? :-o Why?

IIRC, D uses as a first stage a "scheduled for deprecation" state that 
won't break anything, except that it will print a warning.


And when would you plan to start deprecation? Next year? After the GC is 
made precise? Why keep things in limbo for so long? I see no practical 
reason not to mark it as "scheduled for deprecation" right away.


Re: C++ interop - what to do about long and unsigned long?

2014-09-11 Thread Paolo Invernizzi via Digitalmars-d
On Thursday, 11 September 2014 at 07:52:01 UTC, Daniel Murphy 
wrote:
"Andrei Alexandrescu"  wrote in message 
news:lurk42$1228$1...@digitalmars.com...


I don't know what OpenCV is but calls to non-virtual C++ 
methods work in master.


And in 2.066 IIRC.


Andrei: OpenCV is probably the most used open-source library for 
computer vision, and it's switching from a C API to a C++ API, so 
that's an interesting topic for my company...


Daniel: Good to know! Where can I find some documentation about 
it?

Actually the online documentation of 2.066 states:

  "Note: non-virtual functions, and static member functions, 
cannot be accessed."


Can you point to the pull request?

--
/Paolo


How to build dmd with debugging symbols in Linux/x86_64?

2014-09-11 Thread H. S. Teoh via Digitalmars-d
The recent introduction of root/checkedint.c has made it impossible to
build DMD with debugging symbols on Linux/x86_64:

https://issues.dlang.org/show_bug.cgi?id=13460

This has made it rather annoying to debug dmd segfaults, since the only
build that works doesn't have any debugging symbols included.

:-(

Any ideas how to work around this breakage? (Short of hacking
root/checkedint.c, which is not particularly appealing since it's
supposed to be a direct translation of the D version of checkedint.)


T

-- 
Dogs have owners ... cats have staff. -- Krista Casada


Re: alias two froms

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 20:06:15 +
Peter Alexander via Digitalmars-d  wrote:

> If D had no existing customers then yeah, we'd remove it, but I 
> think it's too early to start deprecation.
so people can write even more old-styled code...




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 20:55:42 +
Andrey Lifanov via Digitalmars-d  wrote:

> Everyone talks about the greatness and safety of GC, and how hard it is
> to live without it... But, I suppose, you all know the one programming
> language in which 95% of AAA-quality popular desktop software and OSes
> is written. And that language is C/C++.
> 
> How do you explain this?
there were times when cool software was written in assembler language.
and the real answer was: "there is no alternative".

stop telling fairy tales that it is easy to program safely in C++. but if
you still want C++, i can give you some links where you can download
free C++ compilers.

why switch to D and throw out one of its greatest features? as we
told you, there *IS* a way to avoid the GC if you want. did you read those
messages? the ones about 'scoped' and other things? and about the things
you'll lose if you don't want to use the GC? did you notice that you can
mix GC and manual allocations (with some careful coding, of course)?

you gave no use cases, yet you insist that the GC is bad-bad-bad-bad. we
told you that refcounting is a form of GC, and that it's not as
predictable as you believe, but you keep talking about "no GC".

please, do you really want to learn, or are you just trolling?

btw, i think that this whole thread belongs to 'D.learn'.




Re: std.experimental.logger: practical observations

2014-09-11 Thread Marco Leise via Digitalmars-d
Am Thu, 11 Sep 2014 21:32:44 +
schrieb "Robert burner Schadek" :

> On Thursday, 11 September 2014 at 16:55:32 UTC, Marco Leise wrote:
> > 2. I noticed that as my logger implementation grew more complex
> >and used functionality from other modules I wrote, that if
> >these used logging as well I'd easily end up in a recursive
> >logging situation.
> >
> >Can recursion checks be added somewhere
> >before .writeLogMsg()?
> 
> I don't think I follow. Just to be clear:
> 
> foo() {
>  log(); bar();
> }
> 
> bar() {
>  log(); foo();
> }

Let me clarify. Here is some code from 2015:

void main()
{
    stdlog = new MyLogger();
    // This call may overflow the stack if somethingBadHappened
    // is true in someFunc():
    error("ERROR!!!");
}

class MyLogger : Logger
{
    override void writeLogMsg(ref LogEntry payload)
    {
        auto bla = someFunc();
        useBlaToLog(bla, payload.msg);
    }
}

// This is just some helper function unrelated to logging,
// but it uses the stdlog functionality from Phobos itself,
// as that is good practice in 2015.
auto someFunc()
{
    ...
    if (somethingBadHappened)
    {
        // Now I must not be used myself in a logger
        // implementation, or I overflow the stack!
        error("something bad in someFunc");
    }
    ...
}
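
One way to guard against this inside the logger itself is a per-thread 
re-entrancy check. A rough sketch only (it mirrors MyLogger above; 
inLogCall is just an illustrative name, and nothing here is part of 
std.experimental.logger):

class GuardedLogger : Logger
{
    private static bool inLogCall;   // thread-local by default in D

    override void writeLogMsg(ref LogEntry payload)
    {
        if (inLogCall)
            return;                  // drop messages produced while logging
        inLogCall = true;
        scope(exit) inLogCall = false;

        auto bla = someFunc();       // may call error() itself without recursing
        useBlaToLog(bla, payload.msg);
    }
}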

> > 3. Exceptions and logging don't mix.
> >Logging functions expect the file and line to be the one
> >where the logging function is placed. When I work with C
> >functions I tend to call them through a template that will
> >check the error return code. See:
> >http://dlang.org/phobos/std_exception.html#.errnoEnforce
> >Such templates pick up file and line numbers from where
> >they are instantiated and pass them on to the exception
> >ctor as runtime values.
> >Now when I use error(), I see no way to pass it runtime
> >file and line variables to make the log file reflect the
> >actual file and line where the error occurred, instead of
> >some line in the template or wherever I caught the
> >exception.
> >Not all errors/exceptions are fatal and we might just want
> >to log an exception and continue with execution.
> 
> hm, I think adding template function as requested by dicebot 
> would solve that problem, as it would take line and friends as 
> function parameters

How do you log errors that also throw exceptions?

-- 
Marco



Re: std.experimental.logger: practical observations

2014-09-11 Thread Robert burner Schadek via Digitalmars-d

On Thursday, 11 September 2014 at 16:55:32 UTC, Marco Leise wrote:

So I've implemented my first logger based on the abstract
logger class, (colorize stderr, convert strings to system
locale for POSIX terminals and wstring on Windows consoles).

1. Yes, logging is slower than stderr.writeln("Hello, world!");
   It is a logging framework with timestamps, runtime
   reconfiguration, formatting etc. One has to accept that. :p


what he said



2. I noticed that as my logger implementation grew more complex
   and used functionality from other modules I wrote, that if
   these used logging as well I'd easily end up in a recursive
   logging situation.

   Can recursion checks be added somewhere
   before .writeLogMsg()?


I don't think I follow. Just to be clear:

foo() {
log(); bar();
}

bar() {
log(); foo();
}

?



3. Exceptions and logging don't mix.
   Logging functions expect the file and line to be the one
   where the logging function is placed. When I work with C
   functions I tend to call them through a template that will
   check the error return code. See:
   http://dlang.org/phobos/std_exception.html#.errnoEnforce
   Such templates pick up file and line numbers from where
   they are instantiated and pass them on to the exception
   ctor as runtime values.
   Now when I use error(), I see no way to pass it runtime
   file and line variables to make the log file reflect the
   actual file and line where the error occurred, instead of
   some line in the template or wherever I caught the
   exception.
   Not all errors/exceptions are fatal and we might just want
   to log an exception and continue with execution.


Hm, I think adding a template function as requested by Dicebot would 
solve that problem, as it would take the line and friends as function 
parameters.
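
A rough sketch of what I mean (a hypothetical signature, not the actual 
std.experimental.logger API):

import std.stdio : writefln;

// An overload that accepts file and line as run-time arguments, so code
// that catches an exception can log the location stored in the exception
// instead of the location of the log call:
void error(string msg, string file = __FILE__, size_t line = __LINE__)
{
    writefln("[error] %s(%s): %s", file, line, msg);
}

void logAndContinue(Exception e)
{
    error(e.msg, e.file, e.line);   // reflects where the error actually occurred
}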





Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d
Everyone talks about the greatness and safety of GC, and how hard it is 
to live without it... But, I suppose, you all know the one programming 
language in which 95% of AAA-quality popular desktop software and OSes 
is written. And that language is C/C++.


How do you explain this? Is it just because we are stubborn and silly 
people that we use terrible old C++? No. The real answer: there is no 
alternative.


Stop telling fairy tales that it is not possible to program safely in 
C++. Every experienced programmer can easily handle parallel programming 
and memory management in C++. Yes, it requires certain work and 
knowledge, but it is possible, and many of us do it on an everyday basis 
(at my current job we use reference counting, though the overall quality 
of the code is terrible, I must admit).


Re: RFC: scope and borrowing

2014-09-11 Thread via Digitalmars-d
On Thursday, 11 September 2014 at 16:32:54 UTC, Ivan Timokhin 
wrote:
I am in no way a language guru, but here are a few things that 
bother me in your proposal. Thought I'd share.


Neither am I :-)



1. AFAIK, all current D type modifiers can be safely removed 
from the topmost level (i.e. it is OK to assign 
immutable(int[]) to immutable(int)[]), because they currently 
apply to particular variable, so there's no good reason to 
impose same restrictions on its copy. Situation seems different 
with scope: it is absolutely not safe to cast away and it 
applies to a *value*, not a variable holding it.


The types in your example are implicitly convertible; indeed, no explicit 
cast is necessary. This is because when you copy a const 
value, the result doesn't need to be const. But with scope, it 
makes sense (and is of course necessary) to keep the ownership. I 
don't see that as an inconsistency, but as a consequence of the 
different things const and scope imply: mutability vs. ownership.




This is not only inconsistent, but may also cause trouble with 
interaction with existing features. For example, what should be 
std.traits.Unqual!(scope(int*)) ?


Good question. I would say it needs to keep scope, as it was 
clearly designed with mutability in mind (although it also 
removes shared, which is however related to mutability in a way). 
Ownership is an orthogonal concept to mutability.




2. Consider findSubstring from your examples. What should be 
typeof(findSubstring("", ""))? Is the following code legal?


scope(string) a = ..., b = ...;
...
typeof(findSubstring("", "")) c = findSubstring(a, b);



It's not legal. String literals live forever, so `c` has an owner 
that lives longer than `a` and `b`.


An alternative interpretation would be that the literals are 
temporary expressions; then it would have a very short lifetime, 
thus the assignment would be accepted. But I guess there needs to 
be a rule that says that the specified owners must not live 
shorter than the variable itself.


This is a bit troublesome, because this is how things like 
std.range.ElementType work currently, so they may break. For 
example,
what would be ElementType!ByLineImpl (from the 
"scope(const...)" section)?


I see... it can _not_ be:

scope!(const ByLineImpl!(char, "\n").init)(ByLineImpl!(char, 
"\n"))


because the init value is copied and thus becomes a temporary. 
This is ugly. It would, however, work if ElementType took an instance 
instead of a type.




This troubles me the most, because currently return type of a 
function may depend only on types of its arguments, and there 
is a lot of templated code written in that assumption.


I'm sorry, I don't understand what you mean here. This is clearly 
not true, neither for normal functions, nor for templates.


With the current proposal it ALL could break. Maybe there's no 
way around it if we want a solid lifetime management system, 
but I think this is definitely a problem to be aware of.


The answer may be that scope needs to be something independent from the 
type, more like a storage class, rather than the type modifier I 
suggested. This would also solve the problem with `std.traits.Unqual`, no?


I'd have to think this through, but I believe this is indeed the 
way to go. It would make several other things cleaner. On the 
other hand, it would then be impossible to have scoped member 
fields, because storage classes aren't usable there, AFAIK. This 
would need to be supported first.




3. I believe it was mentioned before, but shouldn't scope 
propagate *outwards*? This would not only make perfect sense, 
since the aggregate obviously "holds the reference" just as 
well as its member does, it would also make various 
range-wrappers and alike automatically scope-aware, in that the 
wrapper would automatically become scoped if the wrapped range 
is scoped.


You mean that any aggregate that contains a member with owner X 
automatically gets X as its owner itself?


I don't think so, because assigning a struct is semantically 
equivalent to assigning its members individually, one after the other (by 
default), or to whatever opAssign() is implemented to do. 
This means that an assignment that violates the rules would fail 
anyway, because the ownership is codified as part of the member's 
type. Instances of wrapper types would also need to be declared 
as scope with the appropriate owner, because otherwise they could 
not contain the scoped variables.


But I'm not sure how this is supposed to work with the storage 
class version of scope.


Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-11 Thread Meta via Digitalmars-d
On Thursday, 11 September 2014 at 20:02:22 UTC, Peter Alexander 
wrote:

On Monday, 8 September 2014 at 15:25:11 UTC, Timon Gehr wrote:
On 09/08/2014 10:51 AM, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
" wrote:


What kind of syntactical sugar do you feel is missing in D?


int square(int x)=>x*x;


Unfortunately we still can't just write:

alias square = x => x * x;

but you can do this:

alias id(alias A) = A;
alias square = id!(x => x * x);


https://github.com/D-Programming-Language/dmd/pull/3638


Re: alias two froms

2014-09-11 Thread Brian Schott via Digitalmars-d

On Thursday, 11 September 2014 at 19:08:27 UTC, eles wrote:

So? Deprecate the old syntax?


I tried to get people to agree to deprecate an old syntax that 
was worse. I also wrote an automated upgrade tool that would fix 
people's code.


I couldn't get it approved. As far as I can tell, the official 
position is that parser bugs and language inconsistencies are 
permanent.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread deadalnix via Digitalmars-d

On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov
wrote:
Hello everyone! Being a C/C++ programmer, I don't understand why a 
language such as D (a systems programming language) implemented a garbage 
collector as a core feature, not as an additional, optional module or 
library. I and many other C/C++ programmers prefer to control things 
manually and use flexible allocation schemes that suit concrete 
situations. When every nanosecond matters, this is the only option, at 
least nowadays.


So this particular thing stops many of us from using D. When you can 
sacrifice performance, you usually choose Java (Scala) or C# because of 
their rich support and libraries. The opposite case is when you need high 
performance; then there is almost nothing to choose from, and C/C++11 now 
doesn't look so bad.


So I am thinking about the idea of completely extracting the GC from D. 
To achieve this, I suppose, I would have to dig deeply into the D 
compiler and also correct/re-implement many things in the modules, so it 
would end up being almost a new version of D.


I would like to hear your suggestions and some basic instructions. Maybe 
this is not a good idea at all, or it will be very hard to realize.


Thank you for your attention!


Dear C++ programmer, I know your language does not give a damn about
multicore, but it has been a reality in hardware for more than 10
years now.

As it turns out, besides being a convenience, a GC is crucial for
multicore programming. Here are some of the reasons:
  - Other memory management techniques require bookkeeping. In a
multicore environment, that means expensive synchronization.
  - Combined with immutability, it allows getting rid of the
concept of ownership. That means data sharing without any sort of
synchronization, once again. This is useful for multicore, but
even on a single core, D's immutable strings + slicing have
proven to be a killer feature for anything text-processing-like
(see the sketch below).
  - It is an enabler for lock-free data structures. As you don't
need to do memory management manually, your data structure can
remain valid with fewer operations, which generally makes it
easier to make those structures atomic/lock-free.
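
A minimal illustration of the slicing point, using plain std.concurrency
(nothing hypothetical here):

import std.concurrency;

void worker()
{
    auto line = receiveOnly!string();   // no copy: just a pointer and a length
    auto word = line[0 .. 5];           // slicing allocates nothing
    // ... process word ...
}

void main()
{
    string text = "hello world";        // string data is immutable
    auto tid = spawn(&worker);
    tid.send(text);                     // immutable data can be shared as-is
}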

It has various other benefits:
  - It removes a whole class of bugs (and memory corruption bugs
tend not to be the easiest to debug).
  - It removes constraints from the original design. That means you
can get a prototype working faster and reduce time to market.
This is key for many companies. Obviously, it still means that
you'll have to do memory management work if you want to make your
code fast and efficient, but this is now something you can
iterate on while you have a working product.

Now, that does not mean the GC is the alpha and omega of memory
management, but, as seen, it has great value. Right now the
implementation is not super good, but improving it has been made a
priority recently. We also recognize that other techniques have value,
and that is why the standard library proposes tools to do reference
counting (and it can do it better than C++, as it knows whether
synchronization is necessary). There is important work being done to
reduce memory allocation where it is not needed in the standard
library, and @nogc will allow you to make sure some parts of your code
do not rely on the GC.


Re: alias two froms

2014-09-11 Thread Peter Alexander via Digitalmars-d

On Thursday, 11 September 2014 at 19:08:27 UTC, eles wrote:

See this:

http://forum.dlang.org/post/kfdkkwikrfvaukhct...@forum.dlang.org

"alias" supports two syntaxes, one of them specifically to 
address writing things like alias A this.


That's inconsistent. I agree not the most urgent thing in the 
world, but while the fixing things happens (see the @property), 
why not address this too?


So? Deprecate the old syntax?


This was discussed recently.

The problem is that the new syntax is only a few versions old, so 
deprecating the old syntax means breaking all D code that's more 
than a year old.


If D had no existing customers then yeah, we'd remove it, but I 
think it's too early to start deprecation.


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Iain Buclaw via Digitalmars-d
On 11 Sep 2014 18:35, "Joakim via Digitalmars-d" <
digitalmars-d@puremagic.com> wrote:
>
> On Thursday, 11 September 2014 at 15:24:41 UTC, ketmar via Digitalmars-d
wrote:
>>
>> there is no "D committee" (thanks to all existing and inexisting gods
>> for that!), so the only source of trust is compiler developer(s). if
>>
>> compiler developers avoid using new features, those features aren't
>> "blessed", they are dangerous and all that.
>
>
> This is already the case with the pure C++ version of dmd, as David said,
since D is currently not used to write any of the D compilers.
>

*cough* SDC *cough*

In any case, I'll reiterate again. It's not as if the C++ implementation
of D uses any features of C++ that were invented in the last decade...

Iain.


Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-11 Thread Peter Alexander via Digitalmars-d

On Monday, 8 September 2014 at 15:25:11 UTC, Timon Gehr wrote:
On 09/08/2014 10:51 AM, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
" wrote:


What kind of syntactical sugar do you feel is missing in D?


int square(int x)=>x*x;


Unfortunately we still can't just write:

alias square = x => x * x;

but you can do this:

alias id(alias A) = A;
alias square = id!(x => x * x);
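
And the aliased lambda is then callable (including at compile time) like 
an ordinary function:

alias id(alias A) = A;
alias square = id!(x => x * x);

static assert(square(3) == 9);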


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Paulo Pinto via Digitalmars-d

Am 11.09.2014 20:32, schrieb Daniel Alves:

You know, currently I spend most of my time programming in ObjC, but I
really love C, C++ and D.

Since the Clang Compiler, ObjC dropped the GC entirely. Yes, that's
right, no GC at all. And, in fact, it does support concurrent
programming and everything else. 



It is incredible how Objective-C's ARC became a symbol for reference 
counting, instead of the living proof of Apple's failure to produce

a working GC for Objective-C that didn't crash every couple of seconds.

Marketing is great!

--
Paulo



Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-11 Thread Timon Gehr via Digitalmars-d
On 09/11/2014 06:45 PM, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
" wrote:

On Thursday, 11 September 2014 at 14:14:38 UTC, Timon Gehr wrote:

...


Which unsound general statement? ...


I was quoting relevant passages.


...

If the community


'the community'?


is trying to undermine the license


I don't even see that happening. What I saw was Daniel voicing a polite 
request based on his perception of the situation and drama immediately 
ensuing without any further adult discussion.



through what might be described as "verbal abuse",


I discourage such behaviour, but the statements made by you and ketmar 
in response to Daniel meet similarly low standards. I suggest not to 
ascribe this incident too much importance.



then the license is put in doubt. I can
then not assume that the next version will be released under the same
license. That makes the source code less attractive. This is what
Dicebot achieves.  The question is, is this what the original author
wanted? And why should Dicebot have the privilege to undermine the
license? This is a trust issue.
...


If this happened, then you would be the one who authorizes Dicebot to 
have such an effect: by your distrust.



How has your 'freedom' been 'restricted', if at all?


Look up the word "shunning".
...


I encourage you to look it up yourself. Nothing of that sort has taken 
place.



...

I don't know what you are talking about. The license grants your
freedoms.


It grants you certain _rights_. It guarantees that you won't be sued for 
certain actions that would otherwise be open to prosecution without 
licensing.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Sean Kelly via Digitalmars-d

On Thursday, 11 September 2014 at 19:14:42 UTC, Marc Schütz wrote:
On Thursday, 11 September 2014 at 16:02:31 UTC, Sean Kelly 
wrote:
On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz 
wrote:
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey 
Lifanov wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library.


I can enlighten you ;-) The reason is safety. Past experience 
(especially with C & C++) has shown that manual memory 
management is easy to get wrong. Besides, certain features 
would not easily be possible without it (dynamic arrays, 
closures).


GC is hugely important for concurrent programming as well.  
Many of the more powerful techniques are basically impossible 
without garbage collection.


There is an interesting alternative that the Linux kernel uses,
called RCU (read-copy-update). They have a convention that
references to RCU managed data must not be held (= borrowed by
kernel threads as local pointers) across certain events,
especially context switches. Thus, when a thread modifies an RCU
data structure, say a linked list, and wants to remove an element
from it, it unlinks it and tells RCU to release the element's
memory "later". The RCU infrastructure will then release it once
all processors on the system have gone through a context switch,
at which point there is a guarantee that no thread can hold a
reference to it anymore.


Yes, RCU is one approach I was thinking of.  The mechanism that
detects when to collect the memory is basically a garbage
collector.


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread via Digitalmars-d
On Thursday, 11 September 2014 at 14:08:01 UTC, Iain Buclaw via 
Digitalmars-d wrote:
1) A distribution doesn't ship gdc already (ie: opensuse, 
fedora)


The openSUSE build service supports injection of binary packages 
for bootstrapping purposes, AFAIK. I would be more worried about 
the license (in case the pre-built compiler is DMD, not GDC).


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread via Digitalmars-d

On Thursday, 11 September 2014 at 16:02:31 UTC, Sean Kelly wrote:
On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz 
wrote:
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library.


I can enlighten you ;-) The reason is safety. Past experience 
(especially with C & C++) has shown that manual memory 
management is easy to get wrong. Besides, certain features 
would not easily be possible without it (dynamic arrays, 
closures).


GC is hugely important for concurrent programming as well.  
Many of the more powerful techniques are basically impossible 
without garbage collection.


There is an interesting alternative that the Linux kernel uses,
called RCU (read-copy-update). They have a convention that
references to RCU managed data must not be held (= borrowed by
kernel threads as local pointers) across certain events,
especially context switches. Thus, when a thread modifies an RCU
data structure, say a linked list, and wants to remove an element
from it, it unlinks it and tells RCU to release the element's
memory "later". The RCU infrastructure will then release it once
all processors on the system have gone through a context switch,
at which point there is a guarantee that no thread can hold a
reference to it anymore.

But this is a very specialized solution and requires a lot
discipline, of course.


alias two froms

2014-09-11 Thread eles via Digitalmars-d

See this:

http://forum.dlang.org/post/kfdkkwikrfvaukhct...@forum.dlang.org

"alias" supports two syntaxes, one of them specifically to 
address writing things like alias A this.


That's inconsistent. I agree not the most urgent thing in the 
world, but while the fixing things happens (see the @property), 
why not address this too?


So? Deprecate the old syntax?


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread eles via Digitalmars-d

On Thursday, 11 September 2014 at 16:02:31 UTC, Sean Kelly wrote:
On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz 
wrote:
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:


hidden allocations anywhere.  And it's completely possible with 
Tango to write an application that doesn't allocate at all once 
things are up and running.  With Phobos... not so much.


Hi,

Could you provide one or two short but illustrative examples showing how 
it's done in Tango, and why it can't be done in Phobos?


Will Andrei's allocators improve that with some rewrite of Phobos?

Thanks.


Re: C++ interop - what to do about long and unsigned long?

2014-09-11 Thread Andrei Alexandrescu via Digitalmars-d

On 9/11/14, 8:39 AM, Sean Kelly wrote:

On Thursday, 11 September 2014 at 00:29:37 UTC, Andrei Alexandrescu wrote:

On 9/10/14, 4:16 PM, bachmeier wrote:


Clearly Walter and everyone should work on whatever they think is
important. I hope your statement doesn't imply that all development
effort is going to be put into C++ compatibility.


Ideally it would.


Is C++ interop really that important or is it another one of those "if D
had this, *then* I would use it!" dismissals.


It is that important.


C interop is clearly
crucial.  Operating system interfaces are written in C, and not being
able to call C functions is hugely limiting.  But C++?  I honestly can't
envision a situation where I would actually care about C++ interop.  Is
this truly a blocker for some people?  Like an actual, honest blocker
and not just a false flag?


Blocker. No two ways about it.

We've done some stuff at FB going with the "ah, we have a C interface so 
we'll just write C wrappers around the C++ code we'll use" approach, and 
it didn't take long to figure out that won't scale, like, at all.



Andrei



Re: RFC: scope and borrowing

2014-09-11 Thread Andrei Alexandrescu via Digitalmars-d

On 9/11/14, 7:06 AM, bearophile wrote:

Marc Schütz:


Now that there are again several GC related topics being discussed, I
thought I'd bump this thread.

Would be nice if Walter and/or Andrei could have a look and share
there opinions. Is this something worth pursuing further? Are there
fundamental objections against it?


At the moment the focus seems to be:
1) C++ interoperability
2) GC (in theory).


scope is GC-related so looking at it is appropriate. -- Andrei



Re: C++ interop - what to do about long and unsigned long?

2014-09-11 Thread Andrei Alexandrescu via Digitalmars-d

On 9/11/14, 5:47 AM, bachmeier wrote:

On Thursday, 11 September 2014 at 00:29:37 UTC, Andrei Alexandrescu wrote:

On 9/10/14, 4:16 PM, bachmeier wrote:

Not to go too far off topic, but C++ interoperability is at 0 merit
points until you've got a GC-less standard library, finished shared
library support on all platforms, and tools that rival those available
for C++.


No. C++ interop and GC are related, but only loosely.


In terms of getting C++ developers to use D, this thread that just
appeared is a perfect example of why C++ interop won't help until you
can use the standard library without the GC.

http://forum.dlang.org/post/impaxeaowtuaxtgty...@forum.dlang.org


Of course. That's why I wrote:


C++ interoperability: 1,000,000 merit points
Anything related to the GC: 999,999 merit points
All else: two digits merit points



Andrei



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Joakim via Digitalmars-d
On Thursday, 11 September 2014 at 18:32:10 UTC, Daniel Alves 
wrote:
You know, currently I spend most of my time programming in 
ObjC, but I really love C, C++ and D.


Since the Clang Compiler, ObjC dropped the GC entirely. Yes, 
that's right, no GC at all. And, in fact, it does support 
concurrent programming and everything else. The magic behind it 
is ARC - Automatic Reference Counting 
(http://clang.llvm.org/docs/AutomaticReferenceCounting.html): 
the compiler analyzes your code, figures out object scopes and 
sets the correct calls to retain/release/autorelease (for those 
who are not familiar with ObjC, pointers are mostly reference 
counted). So there is no need for a GC and all its 
complications.


In addition to that, Rust also has an approach like ObjC 
called Region Pointers and objects' Lifetime 
(http://doc.rust-lang.org/guide-pointers.html#boxes). The idea 
is the same, but, depending on the type of the pointer, the 
compiler may add a call for freeing or for decrementing a 
pointer reference counter.


Finally, it looks like there is a language called Cyclone that 
goes the same way (paper here: 
http://www.cs.umd.edu/projects/cyclone/papers/cyclone-regions.pdf)


Since I read Andrei's book, D Programming Language, I've been 
asking myself why D does not go this way...


Anyone knows about a good reason for that?


There have been some long threads about ARC, including this one 
from a couple months ago:


http://forum.dlang.org/thread/mailman.2370.1402931804.2907.digitalmar...@puremagic.com

Walter doesn't think ARC can be done efficiently.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread bachmeier via Digitalmars-d
On Thursday, 11 September 2014 at 18:32:10 UTC, Daniel Alves 
wrote:
You know, currently I spend most of my time programming in 
ObjC, but I really love C, C++ and D.


Since the Clang Compiler, ObjC dropped the GC entirely. Yes, 
that's right, no GC at all. And, in fact, it does support 
concurrent programming and everything else. The magic behind it 
is ARC - Automated Reference Counting 
(http://clang.llvm.org/docs/AutomaticReferenceCounting.html): 
the compiler analyzes your code, figures out object scopes and 
sets the correct calls to retain/release/autorelease (for those 
who are not familiar with ObjC, pointers are mostly reference 
counted). So there is no need for a GC and all its 
complications.


In addition to that, Rust also has an approach like ObjC 
called Region Pointers and objects' Lifetime 
(http://doc.rust-lang.org/guide-pointers.html#boxes). The idea 
is the same, but, depending on the type of the pointer, the 
compiler may add a call for freeing or for decrementing a 
pointer reference counter.


Finally, it looks like there is a language called Cyclone that 
goes the same way (paper here: 
http://www.cs.umd.edu/projects/cyclone/papers/cyclone-regions.pdf)


Since I read Andrei's book, D Programming Language, I've been 
asking myself why D does not go this way...


Anyone knows about a good reason for that?

On Thursday, 11 September 2014 at 18:04:06 UTC, Andrey Lifanov 
wrote:

Thank you all for replies!

I'm not saying that GC is evil. I just want to have different 
options and more control, when this is required. If D offered 
such choice, many good C++ programmers would have certainly 
considered D as a perfect alternative to C++.


D states that there is no strict and dogmatic rules that it 
follows about programming languages paradigms. And that it is 
a general purpose language. So I think it would be nice to 
have more options of how we can manage memory.


I will continue investigation and certainly inform you if it 
ends with something useful.


Here are a few of the bazillion threads that have discussed the 
topic:

http://forum.dlang.org/thread/ljrm0d$28vf$1...@digitalmars.com?page=1
http://forum.dlang.org/thread/lphnen$1ml7$1...@digitalmars.com?page=1
http://forum.dlang.org/thread/outhxagpohmodjnkz...@forum.dlang.org


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 18:32:09 +
Daniel Alves via Digitalmars-d  wrote:

> compiler analyzes your code, figures out object scopes and sets 
> the correct calls to retain/release/autorelease
this *is* GC. it's just hidden behind compiler magic and can't be
changed without altering the compiler.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 18:04:05 +
Andrey Lifanov via Digitalmars-d  wrote:

> Thank you all for replies!
> 
> I'm not saying that GC is evil. I just want to have different 
> options and more control, when this is required. If D offered 
> such choice
but D *is* offering such choice.




Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 17:52:03 +
David Nadlinger via Digitalmars-d  wrote:

> So what exactly is your argument then?
C++ code.
D features.

D code.
D features.

is there *some* difference?




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Daniel Alves via Digitalmars-d
You know, currently I spend most of my time programming in ObjC, 
but I really love C, C++ and D.


Since the Clang Compiler, ObjC dropped the GC entirely. Yes, 
that's right, no GC at all. And, in fact, it does support 
concurrent programming and everything else. The magic behind it 
is ARC - Automated Reference Counting 
(http://clang.llvm.org/docs/AutomaticReferenceCounting.html): the 
compiler analyzes your code, figures out object scopes and sets 
the correct calls to retain/release/autorelease (for those who 
are not familiar with ObjC, pointers are mostly reference 
counted). So there is no need for a GC and all its complications.
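
For D readers wanting the same deterministic release without compiler-inserted retain/release, the closest off-the-shelf counterpart today is library reference counting, e.g. std.typecons.RefCounted. A minimal sketch of the counting model only (this is not what Clang's ARC actually emits, and it does not handle cycles):

import std.stdio : writeln;
import std.typecons : RefCounted;

// A payload whose release we want to observe deterministically.
struct Buffer
{
    int[] data;
    this(int[] d) { data = d; }
    ~this() { writeln("Buffer released"); }  // runs when the count reaches zero
}

void main()
{
    auto a = RefCounted!Buffer([1, 2, 3]);   // count = 1
    {
        auto b = a;   // copying increments the count (the "retain")
        // ... use b ...
    }                 // b leaves scope: count decremented (the "release")
}                     // count hits zero here; the payload is destroyed deterministically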


In addition to that, Rust also has an approach like ObjC called 
Region Pointers and objects' Lifetime 
(http://doc.rust-lang.org/guide-pointers.html#boxes). The idea is 
the same, but, depending on the type of the pointer, the compiler 
may add a call for freeing or for decrementing a pointer 
reference counter.


Finally, it looks like there is a language called Cyclone that 
goes the same way (paper here: 
http://www.cs.umd.edu/projects/cyclone/papers/cyclone-regions.pdf)


Since I read Andrei's book, D Programming Language, I've been 
asking myself why D does not go this way...


Anyone knows about a good reason for that?

On Thursday, 11 September 2014 at 18:04:06 UTC, Andrey Lifanov 
wrote:

Thank you all for replies!

I'm not saying that GC is evil. I just want to have different 
options and more control, when this is required. If D offered 
such choice, many good C++ programmers would have certainly 
considered D as a perfect alternative to C++.


D states that there is no strict and dogmatic rules that it 
follows about programming languages paradigms. And that it is a 
general purpose language. So I think it would be nice to have 
more options of how we can manage memory.


I will continue investigation and certainly inform you if it 
ends with something useful.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d

Thank you all for the replies!

I'm not saying that GC is evil. I just want to have different 
options and more control when this is required. If D offered 
such a choice, many good C++ programmers would certainly have 
considered D a perfect alternative to C++.


D states that there are no strict and dogmatic rules that it 
follows about programming language paradigms, and that it is a 
general purpose language. So I think it would be nice to have 
more options for how we can manage memory.


I will continue my investigation and will certainly inform you 
if it ends with something useful.


Re: RFC: scope and borrowing

2014-09-11 Thread Marco Leise via Digitalmars-d
On Thu, 11 Sep 2014 13:58:38 +, "Marc Schütz" wrote:

> PING
> 
> Now that there are again several GC related topics being 
> discussed, I thought I'd bump this thread.
> 
> Would be nice if Walter and/or Andrei could have a look and share 
> their opinions. Is this something worth pursuing further? Are 
> there fundamental objections against it?

I just needed this again for a stack based allocator. It would
make such idioms safer where you return a pointer into an RAII
struct and need to make sure it doesn't outlive the struct.
It got me a nasty overwritten stack. I cannot comment on the
implementation, just that I have long felt it is missing.
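
To make the hazard concrete, here is a minimal sketch of the pattern (a toy RAII type, not the actual allocator mentioned above) that compiles today but corrupts the stack; a scope annotation on the returned slice is exactly what would reject it at compile time:

// A toy RAII stack allocator: hands out slices of a buffer that lives
// inside the struct itself (hypothetical, for illustration only).
struct StackAllocator
{
    ubyte[256] buffer;
    size_t used;

    ubyte[] allocate(size_t n)
    {
        auto slice = buffer[used .. used + n];  // points into this struct
        used += n;
        return slice;
    }
}

ubyte[] escape()
{
    StackAllocator a;       // lives in this function's stack frame
    return a.allocate(16);  // BUG: the slice outlives 'a'
}                           // the frame is gone; the slice now dangles

void main()
{
    auto data = escape();   // accepted today; 'scope' would flag it
    data[0] = 42;           // writes into a dead stack frame
}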

-- 
Marco



Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread David Nadlinger via Digitalmars-d
On Thursday, 11 September 2014 at 17:33:14 UTC, ketmar via 
Digitalmars-d wrote:

On Thu, 11 Sep 2014 17:20:51 +
David Nadlinger via Digitalmars-d  
wrote:


Right now, the D compiler (obviously) doesn't use any D 
features at all. I'm not sure how you come to the conclusion 
that being able to use the features from, say, a couple of 
releases ago is worse than that.
i expected this argument. and i don't even want to start arguing about
C++ code that isn't using D features.


So what exactly is your argument then?

David


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 17:32:49 +
Joakim via Digitalmars-d  wrote:

> it'd be better to have your D->C++ translator and keep the C++ 
> fallback, but barring anyone willing to actually work on that, 
> it's obviously not going to happen.
i *was* willing to work on it. not anymore.




Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Joakim via Digitalmars-d
On Thursday, 11 September 2014 at 15:24:41 UTC, ketmar via 
Digitalmars-d wrote:
there is no "D committee" (thanks to all existing and 
inexisting gods
for that!), so the only source of trust is compiler 
developer(s). if
compiler developers avoid using new features, those features 
aren't "blessed", they are dangerous and all that.


This is already the case with the pure C++ version of dmd, as 
David said, since D is currently not used to write any of the D 
compilers.


As for your point about requiring a working D compiler installed 
to experiment with building a D compiler from git, it is a valid 
concern, but I don't think it's a big deal to trade raising that 
small barrier to have a partially self-hosting compiler.  Maybe 
it'd be better to have your D->C++ translator and keep the C++ 
fallback, but barring anyone willing to actually work on that, 
it's obviously not going to happen.


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 17:20:51 +
David Nadlinger via Digitalmars-d  wrote:

> Right now, the D compiler (obviously) doesn't use any D features 
> at all. I'm not sure how you come to the conclusion that being 
> able to use the features from, say, a couple of releases ago is 
> worse than that.
i expected this argument. and i don't even want to start arguing about
C++ code that isn't using D features.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Ali Çehreli via Digitalmars-d

On 09/11/2014 09:59 AM, ketmar via Digitalmars-d wrote:

> On Thu, 11 Sep 2014 15:54:17 +
> Andrey Lifanov via Digitalmars-d  wrote:
>
>> What do you mean? Some sort of memory leak?
> no, i meant that predictability is lost there. and with singly-linked
> lists, for example, freeing list head can take big amount of time too
> if the list is sufficiently long.

In support of your point, one of Bartosz Milewski's blog posts[1] has a 
link to a paper[2] where he says "There is actual research showing that 
the two approaches are just two sides of the same coin."


Ali

[1] http://bartoszmilewski.com/2013/09/19/edward-chands/

[2] http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf



Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread David Nadlinger via Digitalmars-d
On Thursday, 11 September 2014 at 15:24:41 UTC, ketmar via 
Digitalmars-d wrote:
if compiler developers avoid using new features, those features 
aren't "blessed", they are dangerous and all that.


Wat.

Right now, the D compiler (obviously) doesn't use any D features 
at all. I'm not sure how you come to the conclusion that being 
able to use the features from, say, a couple of releases ago is 
worse than that.


David


Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-11 Thread Joakim via Digitalmars-d
On Thursday, 11 September 2014 at 12:26:39 UTC, Ola Fosheim 
Grøstad wrote:
I am not here to increase my self worth, though I don't mind an 
educated argument or a role playing stunt, I am here to 
increase the probability of having a programming language that 
is better than the alternatives for server programming within a 
few years. With the current situation it will take another 
decade.


Let me begin by noting that I'm glad you're tinkering with D, :) 
as I noted earlier that experimentation is good.


You appear to think that management == control. You come 
through as a control freak, but I could be wrong.


Management is about nurturing talent, smoothing out differences 
and facilitating productivity. It is not primarily about 
control.


Cults usually have problem gaining more than 12-30 members. 
They constrain the freedom of their members too much. Bad idea.


These "totalitarian" or "cult" arguments don't go anywhere 
because it is easy to shrug them off, since the reality is far 
from that extreme.  The core D group can sometimes be insular, 
but I don't think that's really the problem here.


I'll be happy to fork D if it makes it possible for me to work 
on it full time. A license is a contract. You either stand by 
it or renegotiate it. I take the liberties of the liberal Boost 
license literally and will enjoy them in any fashion I fancy.


What you are saying is basically that you disagree with the 
license, so maybe Walter should have spent more time making 
sure that he had backing for it in the community, but that is 
an issue you have to take up with him. Not me or ketmar.


This argument has nothing to do with the Boost license, as 
practically every open source license allows the same forking.  
Looking back at how this blew up, it was actually Daniel who 
asked you not to "fork D's syntax" and then Dicebot merely 
reinforced that, before you both went overboard.


The real issue is that historically no programming language has 
wanted a bunch of incompatible syntax dialects floating 
around, as that makes it difficult for many devs to understand 
what the language proper actually consists of.  That concern 
about "fragmentation" is all Daniel and Dicebot were speaking to.


However, I've noted that is not a reason to frown on syntax 
experimentation like you and ketmar want to do, as your syntax 
tweaking is far from a full-blown or popular dialect yet.  I've 
also noted that there may be a modern solution to such a problem, 
automated syntax translation for different dialects.


Dicebot can't stop you from experimenting with new syntax: in 
fact, he started off by saying what _he_ would do instead, not 
what _you_ should do, in his second post.  Keep tinkering and 
sharing patches and let us know what you find.


std.experimental.logger: practical observations

2014-09-11 Thread Marco Leise via Digitalmars-d
So I've implemented my first logger based on the abstract
logger class (it colorizes stderr, converts strings to the system
locale for POSIX terminals and to wstring on Windows consoles).

1. Yes, logging is slower than stderr.writeln("Hello, world!");
   It is a logging framework with timestamps, runtime
   reconfiguration, formatting etc. One has to accept that. :p

2. I noticed that as my logger implementation grew more complex
   and used functionality from other modules I wrote, I'd easily
   end up in a recursive logging situation if those modules used
   logging as well.

   Can recursion checks be added somewhere
   before .writeLogMsg()?

3. Exceptions and logging don't mix.
   Logging functions expect the file and line to be the one
   where the logging function is placed. When I work with C
   functions I tend to call them through a template that will
   check the error return code. See:
   http://dlang.org/phobos/std_exception.html#.errnoEnforce
   Such templates pick up file and line numbers from where
   they are instantiated and pass them on to the exception
   ctor as runtime values.
   Now when I use error(), I see no way to pass it runtime
   file and line variables to make the log file reflect the
   actual file and line where the error occured, instead of
   some line in the template or where ever I caught the
   exception.
   Not all errors/exceptions are fatal and we might just want
   to log an exception and continue with execution.
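
A small sketch of the idiom described in 3., assuming nothing about std.experimental.logger's internals: the wrapper captures __FILE__/__LINE__ as default arguments at the call site and carries them as runtime values, which is what a logging overload would also have to accept in order to report the real location (checkedCall and logError are hypothetical names):

import core.stdc.errno : errno;
import std.stdio : stderr;
import std.string : format;

// A hypothetical checked-call wrapper in the style of errnoEnforce:
// file and line default to the *call site* and travel as runtime values.
T checkedCall(T)(T result, lazy string msg,
                 string file = __FILE__, size_t line = __LINE__)
{
    if (result < 0)
        throw new Exception(format("%s (errno=%s) at %s:%s",
                                   msg, errno, file, line));
    return result;
}

// For the log to show the original location, the logging call would need
// an overload that accepts the same runtime file/line values:
void logError(string msg, string file = __FILE__, size_t line = __LINE__)
{
    stderr.writefln("[error] %s:%s: %s", file, line, msg);
}

void main()
{
    try
        checkedCall(-1, "open failed");       // reported with main's file/line
    catch (Exception e)
        logError(e.msg, __FILE__, __LINE__);  // forwarding the values explicitly
}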

-- 
Marco



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 15:54:17 +
Andrey Lifanov via Digitalmars-d  wrote:

> What do you mean? Some sort of memory leak?
no, i meant that predictability is lost there. and with singly-linked
lists, for example, freeing list head can take big amount of time too
if the list is sufficiently long.

what D really needs is a GC that will not stop the world when
collecting. you will be able to separate threads that allocate from
threads that don't, and non-allocating threads will work without pauses.
then 'worker' threads can use lists of free objects almost without GC
hits.
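
A minimal sketch of the "worker threads reuse lists of free objects" idea (all names made up): once the pool is warm, the hot loop no longer allocates, so it gives the collector no new work.

// Hypothetical per-thread free list: recycle objects instead of allocating
// in the hot path, so a collection has nothing new to scan for here.
class Message
{
    ubyte[64] payload;
    Message next;              // intrusive free-list link
}

Message freeList;              // module-level variables are thread-local in D

Message acquire()
{
    if (freeList is null)
        return new Message();  // only allocates while warming up
    auto m = freeList;
    freeList = m.next;
    m.next = null;
    return m;
}

void release(Message m)
{
    m.next = freeList;         // put it back for reuse
    freeList = m;
}

void worker()
{
    foreach (i; 0 .. 1_000_000)
    {
        auto m = acquire();
        // ... fill and process m.payload ...
        release(m);
    }
}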




Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-11 Thread via Digitalmars-d

On Thursday, 11 September 2014 at 14:14:38 UTC, Timon Gehr wrote:
Protip: Stop categorising people in a blurry way and making 
unsound general statements about those categories if you want 
your points to be understood.


Which unsound general statement? If you are talking about my 
response to Dicebot it was "mirroring" his own arguments to make 
him realize where he was going. Basically outlining the 
consequences of his own rhetoric.


AFAICS, the Boost license is just about opting out of possibly 
annoying defaults of copyright law. I see no reason to adopt an 
ideology over this.


I don't understand this statement. I would not touch a code base 
that is not under PD, Boost, BSD or MIT for very pragmatic 
reasons. Those pragmatic reasons are that I don't want my freedom 
to be tied down.


If the community is trying to undermine the license through what 
might be described as "verbal abuse", then the license is put in 
doubt. I can then not assume that the next version will be 
released under the same license. That makes the source code less 
attractive. This is what Dicebot achieves. The question is, is 
this what the original author wanted? And why should Dicebot 
have the privilege to undermine the license? This is a trust 
issue.



How has your 'freedom' been 'restricted', if at all?


Look up the word "shunning".

(BTW: freedom becomes a non-trivial concept as soon as more 
than one entity should be free.)


I don't know what you are talking about. The license grants your 
freedoms. If a third party tries to restrict that freedom using 
threats or verbal abuse then they are doing something wrong. This 
ought to be obvious.


I am still perplexed by the whole "valued member" rhetorical 
element Dicebot uses. He seems to place an unusual emphasis on 
the need to evaluate other people. I'd frankly suggest he deal 
with those issues somewhere else.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Paulo Pinto via Digitalmars-d

On 11.09.2014 18:02, Sean Kelly wrote:

On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz wrote:

On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov wrote:

Hello everyone! Being a C/C++ programmer I don't understand, why such
language as D (system programming language) implemented garbage
collector as a core feature, not as additional optional module or
library.


I can enlighten you ;-) The reason is safety. Past experience
(especially with C & C++) has shown that manual memory management is
easy to get wrong. Besides, certain features would not easily be
possible without it (dynamic arrays, closures).


GC is hugely important for concurrent programming as well.  Many of the
more powerful techniques are basically impossible without garbage
collection.

But I think this largely comes down to standard library design. Java,
for example, is a pretty okay language from a syntax perspective.  The
problem with it is more that doing anything with the standard library
requires generating tons of often temporary objects.  In the server
programming realm, an unbelievable amount of effort has been put into
working around this particular problem (look at what the Netty group has
been doing, for example).  So it's not so much that the language
supports garbage collection as that the established programming paradigm
encourages you to lean heavily on it.

By allowing manual memory management, D is far closer to C++. The
problem is that, like Java, many APIs in the standard library are
written in such a way that memory allocations are unavoidable.  However,
it doesn't have to be this way.  An essential design rule for Tango, for
example, was to perform no hidden allocations anywhere.  And it's
completely possible with Tango to write an application that doesn't
allocate at all once things are up and running.  With Phobos... not so
much.

In short, I think that a crucial factor affecting the perception of a
language is its standard library.  It stands as a template for how code
in that language is intended to be written, and is the framework from
which essentially all applications are built. Breaking from this tends
to be difficult to the point of where you're really better off looking
for a different language that suits your needs better.

I think Java is in a weird spot in that it's so deeply entrenched at
this point that many think it's easier to try and force people to change
their programming habits than it is to get them to use a different
language.  Though encouraging a transition to a compatible language with
better fundamentals is probably preferable (Scala?).

C++ is kind of in the same situation, which I guess is why some feel
that C++ interop might be a good thing.  But realistically, working with
a truly hybrid code base only serves to further complicate things when
the motivating goal is simplification. It's typically far preferable to
simply have communicating agents written in different languages that all
talk the same protocol. That C++ app can go into maintenance mode and
the Java, D, and whatever other new stuff just talks to it via a socket
connection and then shivers and washes its hands when done.


It has been acknowledged that it was a mistake not to allow better 
control in Java over where to place data, for the last mile in performance.


This is why the major focus in Java 9+ is on value types, control over 
array layouts, a new FFI interface and promotion of Unsafe to a public API.


http://www.oracle.com/technetwork/java/javase/community/jlssessions-2255337.html

This is a consequence of Java's use in big data and high performance 
trading systems.


D's advantage is that (except for current GC), it offers today what Java 
can only offer in the next revision, or even later if not all features 
happen to be 9 ready.


--
Paulo




Re: RFC: scope and borrowing

2014-09-11 Thread Ivan Timokhin via Digitalmars-d
I am in no way a language guru, but here are a few things that bother me 
in your proposal. Thought I'd share.


1. AFAIK, all current D type modifiers can be safely removed from the 
topmost level (i.e. it is OK to assign immutable(int[]) to 
immutable(int)[]), because they currently apply to a particular variable, 
so there's no good reason to impose the same restrictions on its copy. 
Situation seems different with scope: it is absolutely not safe to cast 
away and it applies to a *value*, not a variable holding it.


This is not only inconsistent, but may also cause trouble with 
interaction with existing features. For example, what should be 
std.traits.Unqual!(scope(int*)) ?


2. Consider findSubstring from your examples. What should be 
typeof(findSubstring("", ""))? Is the following code legal?


scope(string) a = ..., b = ...;
...
typeof(findSubstring("", "")) c = findSubstring(a, b);

This is a bit troublesome, because this is how things like 
std.range.ElementType work currently, so they may break. For example,

what would be ElementType!ByLineImpl (from the "scope(const...)" section)?

This troubles me the most, because currently the return type of a function 
may depend only on the types of its arguments, and there is a lot of 
templated code written in that assumption. With the current proposal it 
ALL could break. Maybe there's no way around it if we want a solid 
lifetime management system, but I think this is definitely a problem to 
be aware of.


3. I believe it was mentioned before, but shouldn't scope propagate 
*outwards*? This would not only make perfect sense, since the aggregate 
obviously "holds the reference" just as well as its member does, it 
would also make various range-wrappers and alike automatically 
scope-aware, in that the wrapper would automatically become scoped if 
the wrapped range is scoped.


On 11.09.2014 15:58, "Marc Schütz" wrote:

PING

Now that there are again several GC related topics being discussed, I
thought I'd bump this thread.

Would be nice if Walter and/or Andrei could have a look and share their
opinions. Is this something worth pursuing further? Are there
fundamental objections against it?

On Sunday, 24 August 2014 at 13:14:45 UTC, Marc Schütz wrote:

In the "Opportunities for D" thread, Walter again mentioned the topics
ref counting, GC, uniqueness, and borrowing, from which a lively
discussion developed [1]. I took this thread as an opportunity to
write down some ideas about these topics. The result is a rather
extensive proposal for the implementation of borrowing, and its
implementations:

http://wiki.dlang.org/User:Schuetzm/scope

This is not a real DIP, but before I put more work into formalizing
it, I'd like to hear some thoughts from the languages gurus here:

* Is this the general direction we want to go? Is it acceptable in
general?
* Is the proposal internally consistent?
* How big would the effort to implement it be? (I suspect it's a large
amount of work, but relatively straightforward.)

[1] http://forum.dlang.org/thread/lphnen$1ml7$1...@digitalmars.com






Re: [Article] D's Garbage Collector Problem

2014-09-11 Thread Marco Leise via Digitalmars-d
On Thu, 11 Sep 2014 14:30:05 +, "Kagamin" wrote:

> >There are various api the compiler use to allocate from the GC. 
> >Some do not specify if the allocated memory contains pointers or 
> >not, and none do specify the type qualifier of the memory.
> 
> Is it true about pointers? Which functions?
> And why type qualifiers matter?

Immutable data structures cannot have pointers changed or set
to null. Also they can only reference other immutable data.
This means that they form sort of a big blob that is kept
alive by one or more pointers to it, but the GC never needs
to check the immutable pointers inside of it.

Shared/unshared may affect implementations that provide thread
local GC. E.g. only shared data needs to be handled by a
global stop the world GC. I'm not sure though.
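
Related to the pointer question quoted above: the public GC API already lets a caller state whether a block contains pointers, via core.memory's block attributes. A small sketch of that (not taken from the article):

import core.memory : GC;

void example()
{
    // A block of raw numbers: telling the GC it holds no pointers means
    // the collector never needs to scan its contents.
    auto raw = cast(double*) GC.malloc(1024 * double.sizeof,
                                       GC.BlkAttr.NO_SCAN);

    // The same request without NO_SCAN is scanned conservatively,
    // every word treated as a potential pointer.
    auto scanned = GC.malloc(1024);

    // Typed allocations carry this information automatically: an array of
    // double contains no pointers, so the runtime can mark it NO_SCAN itself.
    auto arr = new double[](1024);

    // ... fill raw/scanned/arr with data ...
}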

-- 
Marco



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Sean Kelly via Digitalmars-d

On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz wrote:
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library.


I can enlighten you ;-) The reason is safety. Past experience 
(especially with C & C++) has shown that manual memory 
management is easy to get wrong. Besides, certain features 
would not easily be possible without it (dynamic arrays, 
closures).


GC is hugely important for concurrent programming as well.  Many 
of the more powerful techniques are basically impossible without 
garbage collection.


But I think this largely comes down to standard library design.  
Java, for example, is a pretty okay language from a syntax 
perspective.  The problem with it is more that doing anything 
with the standard library requires generating tons of often 
temporary objects.  In the server programming realm, an 
unbelievable amount of effort has been put into working around 
this particular problem (look at what the Netty group has been 
doing, for example).  So it's not so much that the language 
supports garbage collection as that the established programming 
paradigm encourages you to lean heavily on it.


By allowing manual memory management, D is far closer to C++.  
The problem is that, like Java, many APIs in the standard library 
are written in such a way that memory allocations are 
unavoidable.  However, it doesn't have to be this way.  An 
essential design rule for Tango, for example, was to perform no 
hidden allocations anywhere.  And it's completely possible with 
Tango to write an application that doesn't allocate at all once 
things are up and running.  With Phobos... not so much.


In short, I think that a crucial factor affecting the perception 
of a language is its standard library.  It stands as a template 
for how code in that language is intended to be written, and is 
the framework from which essentially all applications are built.  
Breaking from this tends to be difficult to the point of where 
you're really better off looking for a different language that 
suits your needs better.


I think Java is in a weird spot in that it's so deeply entrenched 
at this point that many think it's easier to try and force people 
to change their programming habits than it is to get them to use 
a different language.  Though encouraging a transition to a 
compatible language with better fundamentals is probably 
preferable (Scala?).


C++ is kind of in the same situation, which I guess is why some 
feel that C++ interop might be a good thing.  But realistically, 
working with a truly hybrid code base only serves to further 
complicate things when the motivating goal is simplification.  
It's typically far preferable to simply have communicating agents 
written in different languages that all talk the same protocol.  
That C++ app can go into maintenance mode and the Java, D, and 
whatever other new stuff just talks to it via a socket connection 
and then shivers and washes its hands when done.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d

On Thursday, 11 September 2014 at 15:39:26 UTC, ketmar via
Digitalmars-d wrote:

On Thu, 11 Sep 2014 15:23:53 +
Andrey Lifanov via Digitalmars-d  
wrote:


is that in case of manual memory management I know completely 
when and what will be freed or allocated (with the help of 
smart pointers/reference counting, of course).
but you don't. you can only estimate, that's all. what if you're 
passing a refcounted object to another function which stores it 
somewhere? oops.


What do you mean? Some sort of memory leak? I guess you can
always write programs, with or without a GC, that will steal or
hide stuff. As the usual countermeasure, you just have to
carefully plan single-storage/multiple-users code.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Paulo Pinto via Digitalmars-d

On 11.09.2014 14:38, Andrey Lifanov wrote:

Hello everyone! Being a C/C++ programmer I don't understand, why such
language as D (system programming language) implemented garbage
collector as a core feature, not as additional optional module or
library. I and many other C/C++ programmers prefer to control things
manually and use flexible allocation schemes that suitable for concrete
situations. When every nanosecond matters, this is the only option, at
least nowadays.

...


Since the mid-70's there are system programming languages with GC.

Namely Algol 68(IFIP), Mesa/Cedar (Xerox), Modula-3 (Olivetti), Oberon 
(ETHZ) and a few others.


They just happened to be married to OSes that weren't as successful as 
UNIX jumping out of the research labs into the industry.


It is about time systems programming catches up with the 70's 
innovations outside of the PDP-11 world.


--
Paulo


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d
I have recently found: 
http://en.wikibooks.org/wiki/D_Programming/Garbage_collector/Thoughts_about_better_GC_implementations


Good stuff there.


Re: C++ interop - what to do about long and unsigned long?

2014-09-11 Thread Sean Kelly via Digitalmars-d
On Thursday, 11 September 2014 at 00:29:37 UTC, Andrei 
Alexandrescu wrote:

On 9/10/14, 4:16 PM, bachmeier wrote:

Clearly Walter and everyone should work on whatever they think 
is important. I hope your statement doesn't imply that all 
development effort is going to be put into C++ compatibility.


Ideally it would.


Is C++ interop really that important or is it another one of 
those "if D had this, *then* I would use it!" dismissals.  C 
interop is clearly crucial.  Operating system interfaces are 
written in C, and not being able to call C functions is hugely 
limiting.  But C++?  I honestly can't envision a situation where 
I would actually care about C++ interop.  Is this truly a blocker 
for some people?  Like an actual, honest blocker and not just a 
false flag?


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 15:23:53 +
Andrey Lifanov via Digitalmars-d  wrote:

> is that in case of manual memory management I know completely 
> when and what will be freed or allocated (with the help of smart 
> pointers/reference counting, of course).
but you don't. you can only estimate, that's all. what if you're passing a
refcounted object to another function which stores it somewhere? oops.

and in D you are free to use structs, `scoped!`, and malloc()/free()
(see std.typecons for "scoped" template source to see how it's done).

you can write your own array implementations too. slices are hard to do
without proper GC though, and closures require GC, AFAIK.
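
A short sketch of the options listed above, using only what Phobos/druntime already ship (std.typecons.scoped and the C allocator):

import core.stdc.stdlib : free, malloc;
import std.typecons : scoped;

class Widget
{
    int id;
    this(int id) { this.id = id; }
}

void example()
{
    // scoped! places the class instance on the stack and destroys it
    // deterministically at the end of the scope -- no GC allocation.
    auto w = scoped!Widget(42);

    // Plain C allocation for POD data; the GC never sees this block.
    auto buf = cast(int*) malloc(128 * int.sizeof);
    scope (exit) free(buf);

    // ... use w and buf ...
}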

> So maybe instead of getting rid of GC I will consider the 
> implementation of optimized moving GC.
copying GC requires support from the compiler side, and it is not that
easy at all (imagine malloc()ed blocks which hold references to
GC-alloced objects, for example). current D GC is conservative, so you
can just register malloc()ed block as root, but for copying GC you'll
need either to inform GC about exact block structure, or provide your
own scan/copy/fix callbacks.
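
The "register the malloc()ed block" part is already expressible with the current conservative GC through core.memory; a minimal sketch:

import core.memory : GC;
import core.stdc.stdlib : free, malloc;
import std.conv : to;

struct Node { Node* next; string label; }   // 'label' may point to GC memory

void example()
{
    // The block itself is not GC-managed, but it stores references to
    // GC-managed data, so the collector must scan it: register it as a range.
    auto nodes = cast(Node*) malloc(16 * Node.sizeof);
    GC.addRange(nodes, 16 * Node.sizeof);

    nodes[0] = Node(null, 1234.to!string);  // GC-allocated string, reachable
                                            // only through the malloc()ed block

    // ... use nodes ...

    GC.removeRange(nodes);                  // unregister before freeing
    free(nodes);
}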

and if copying GC will not do 'stop-the-world', some threads can hold
pointers in registers... and inserting read/write barriers will hurt
performance...




Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 15:38:21 +0100
Iain Buclaw via Digitalmars-d  wrote:

> That's not a very accurate view at all.  GCC and Clang are written in
> a very idiomatic C++, but does that make people think C++x14 features
> are dangerous and untested?
being "blessed by committee" has it's advantages. ;-)

there is no "D committee" (thanks to all existing and inexisting gods
for that!), so the only source of trust is compiler developer(s). if
compiler developers avoid using new features, those features aren't
"blessed", they are dangerous and all that.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d

Thank you for quick response!

I guess I need to investigate further and write good tests to 
compare C++ and D solutions. The main advantage over a GC language 
is that with manual memory management I know exactly when and 
what will be freed or allocated (with the help of smart 
pointers/reference counting, of course). You can say that this 
has no importance to the programmer, but it does, because you 
don't have performance spikes and don't have to waste processor 
and slow memory time scanning for what needs to be collected. So 
the big advantage (at the price of greater responsibility) is 
much greater predictability of how your program will perform.


The main problem of heap-intensive programs with a huge number of 
objects is heap fragmentation. As the program runs, "holes" of 
free memory can appear which complicate further allocations, 
especially for big contiguous arrays. It also ruins cache 
performance, because similar objects belonging to one array can 
end up far from each other, separated by such "holes". To be 
fair, C/C++ do not have a built-in solution for this problem 
either, but there you can program one manually.


So maybe instead of getting rid of the GC I will consider 
implementing an optimized moving GC.


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Iain Buclaw via Digitalmars-d
On 11 September 2014 15:31, ketmar via Digitalmars-d
 wrote:
> On Thu, 11 Sep 2014 15:07:48 +0100
> Iain Buclaw via Digitalmars-d  wrote:
>
>> Just because the compiler is not implemented with shiny new features,
>> does not stop progress of implementing shiny new features.
> but using new features by compiler authors themselves will allow faster
> adoption and better testing. and why don't use D in all it's glory for
> compiler code? some people will look at compiler code to find
> "idiomatic" way of doing things. and if compiler writers themselves
> avoid new features... well, that means that those new features are
> dangerous and untested and better be avoided.

That's not a very accurate view at all.  GCC and Clang are written in
a very idiomatic C++, but does that make people think C++x14 features
are dangerous and untested?

Iain.


Re: [Article] D's Garbage Collector Problem

2014-09-11 Thread Kagamin via Digitalmars-d
There are various api the compiler use to allocate from the GC. 
Some do not specify if the allocated memory contains pointers or 
not, and none do specify the type qualifier of the memory.


Is it true about pointers? Which functions?
And why type qualifiers matter?


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 15:07:48 +0100
Iain Buclaw via Digitalmars-d  wrote:

> Just because the compiler is not implemented with shiny new features,
> does not stop progress of implementing shiny new features.
but using new features by compiler authors themselves will allow faster
adoption and better testing. and why don't use D in all it's glory for
compiler code? some people will look at compiler code to find
"idiomatic" way of doing things. and if compiler writers themselves
avoid new features... well, that means that those new features are
dangerous and untested and better be avoided.




Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-11 Thread Timon Gehr via Digitalmars-d
On 09/11/2014 01:46 PM, "Ola Fosheim Grøstad" wrote:

The D community is as
diverse as the language and even if three people yell in the
same tone, it doesn't mean everyone else believes the same.


I know that, but newbies don't know that. [...]Exchanging mods with other 
newbies (which I am in a way)


?

Protip: Stop categorising people in a blurry way and making unsound 
general statements about those categories if you want your points to be 
understood.




A bunch of unwritten rules tend to lead to unpleasant situations. It is
important for a development community to align their attitudes to the
freedoms implied by the license.


AFAICS, the Boost license is just about opting out of possibly annoying 
defaults of copyright law. I see no reason to adopt an ideology over this.



A Boost license comes with a set of
freedoms that I would expect the community to back fully.


I.e. you expect 'the community' to hold restricted opinions?

Also: Where does the Boost licence say anything about discussing 
arbitrary derivative works on the official forums?



There is no good reason for having forum members adhere to a separate set of 
rules
where they have their freedom restricted.


How has your 'freedom' been 'restricted', if at all?

(BTW: freedom becomes a non-trivial concept as soon as more than one 
entity should be free.)




Re: RFC: scope and borrowing

2014-09-11 Thread bearophile via Digitalmars-d

Marc Schütz:

Now that there are again several GC related topics being 
discussed, I thought I'd bump this thread.


Would be nice if Walter and/or Andrei could have a look and 
share their opinions. Is this something worth pursuing further? 
Are there fundamental objections against it?


At the moment the focus seems to be:
1) C++ interoperability
2) GC (in theory).

Bye,
bearophile


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Iain Buclaw via Digitalmars-d
On 11 September 2014 11:13, ketmar via Digitalmars-d
 wrote:
> On Thu, 11 Sep 2014 11:54:08 +0200
> Daniel Kozak via Digitalmars-d  wrote:
>
>> What? I don't see any problem with binary blob. With gcc it is same I
>> need binary blob to be able to compile gcc from source. And if I am
>> really scared of the binary dmd compiler I can still use the last C++ version
>> and compile it with gcc, then use this product to compile next ddmd
>> and so on.
> as i said -- good luck with it. D is not GCC (yet?), and GDC is not a
> part of GCC. it's very naive to assume that FOSS programmer that wants
> to try D will take last C++ version, then compiles it, than compiles
> next D version and so on. he will take either gdc from distro repo (and
> this will be old, if not ancient) just to find that it has no shiny new
> features the programmer just read about in NG, or will try to build
> HEAD and... and drop D, 'cause "if they make it so hard to build their
> compiler, they can play with it without me".
>

Two ways of looking at it, this is a problem if:

1) A distribution doesn't ship gdc already (ie: opensuse, fedora)
2) A developer building gdc doesn't have a D compiler available.

As for the binary blob, well, gcc had to start from somewhere, and it
too was originally built with a closed-source binary blob.

If you are concerned, enable bootstrapping, that will give you an
ethically clean compiler.

> inability to be built with GCC out-of-the-box pushing D into
> marginality. and inability to use new shiny compiler features 'cause
> compiler should be buildable with previous versions too. and this will
> effectively kill language progress:
>

Just because the compiler is not implemented with shiny new features,
does not stop progress of implementing shiny new features.

Iain.


Re: RFC: scope and borrowing

2014-09-11 Thread via Digitalmars-d

PING

Now that there are again several GC related topics being 
discussed, I thought I'd bump this thread.


Would be nice if Walter and/or Andrei could have a look and share 
their opinions. Is this something worth pursuing further? Are 
there fundamental objections against it?


On Sunday, 24 August 2014 at 13:14:45 UTC, Marc Schütz wrote:
In the "Opportunities for D" thread, Walter again mentioned the 
topics ref counting, GC, uniqueness, and borrowing, from which 
a lively discussion developed [1]. I took this thread as an 
opportunity to write down some ideas about these topics. The 
result is a rather extensive proposal for the implementation of 
borrowing, and its implementations:


http://wiki.dlang.org/User:Schuetzm/scope

This is not a real DIP, but before I put more work into 
formalizing it, I'd like to hear some thoughts from the 
languages gurus here:


* Is this the general direction we want to go? Is it acceptable 
in general?

* Is the proposal internally consistent?
* How big would the effort to implement it be? (I suspect it's 
a large amount of work, but relatively straightforward.)


[1] http://forum.dlang.org/thread/lphnen$1ml7$1...@digitalmars.com




Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Marco Leise via Digitalmars-d
On Thu, 11 Sep 2014 07:12:49 +0100, Iain Buclaw via Digitalmars-d wrote:

> By way of example, the version of D shipped with gcc-4.9 in
> Debian/Ubuntu is 2.065, if we were to switch now, then that compiler
> version will need to be able to build whatever will be the current
> when gcc-5.0 comes out.
> 
> Iain.

For Gentoo I used the third version component as well. I found
it better matches the D release cycle:

4.8.1 => 2.063
4.8.2 => 2.064
4.8.3 => 2.065

-- 
Marco



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Adam D. Ruppe via Digitalmars-d
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:

And I think of idea of complete extraction of GC from D.


You could also recompile the runtime library without the GC. 
Heck, with the new @nogc on your main, the compiler (rather than 
the linker) should even give you nicish error messages if you try 
to use it, but I've done it even before that was an option.


Generally though, GC fear is overblown. Use it in most places and 
just don't use it where it makes things worse.
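
For reference, a tiny sketch of the @nogc-on-main approach with 2.066 (error text paraphrased):

void main() @nogc
{
    int[16] local;            // stack allocation is fine under @nogc
    local[0] = 1;

    // auto a = new int[16];  // compile error: cannot use 'new' in @nogc function
    // auto b = [1, 2, 3];    // array literals that allocate are rejected as well
}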


Re: [Article] D's Garbage Collector Problem

2014-09-11 Thread Chris via Digitalmars-d
On Thursday, 11 September 2014 at 11:28:38 UTC, Rikki Cattermole 
wrote:


I see hmm, I may compile a custom lua dll then with lanes 
(think threading). Well maybe, we shall see.
Might also be a good idea to hook it up to my skeleton tool[0] 
to enable using e.g. gists for require's.


[0] https://github.com/rikkimax/skeleton


Maybe I can set up a simple test version for you, when I find the 
time. Can't promise but I'll try. One thing I haven't looked into 
yet is a clever way of handling the output in stdout. At the 
moment, I write to a temporary file ("test.txt") that I read in again.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread via Digitalmars-d
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library.


I can enlighten you ;-) The reason is safety. Past experience 
(especially with C & C++) has shown that manual memory management 
is easy to get wrong. Besides, certain features would not easily 
be possible without it (dynamic arrays, closures).


I and many other C/C++ programmers prefer to control things 
manually and use flexible allocation schemes that suitable for 
concrete situations. When every nanosecond matters, this is the 
only option, at least nowadays.


So this particular thing stops many of us from using D. When 
you can abandon performance, you usually choose Java (Scala) or 
C# because of their rich support and libraries. And the 
opposite matter is when you need high performance. In this case 
there is almost nothing to choose from. C/C++11 now looks not 
so bad.


And I think of idea of complete extraction of GC from D. For 
this to achieve, I suppose, I have to dig deeply into D 
compiler and also correct/re-implement many things in modules, 
so this ends up with almost new version of D.


I would like to hear your suggestions and some basic 
instructions. Maybe this is not a good idea at all or it will 
be very hard to realize.


I don't think it is necessary to remove the GC completely. It can 
only interfere with your program in three situations:


1) When you allocate and run out of memory, the GC will first try 
to release some unneeded memory before requesting more from the 
OS. If you don't allocate anything in a performance critical 
section of your program, the GC will never run.
2) (Actually a consequence of 1) When the GC runs, it stops all 
threads, including those that never allocate.

3) When you call it manually.

For 1), there is the @nogc attribute. Any function marked with 
this attribute is guaranteed to never allocate on the GC heap, 
including via any other functions it calls. If you write `void 
main() @nogc`, your entire program will be GC-less. This 
attribute has only been introduced in the latest release, and 
some parts of the standard library cannot be used with it yet.


For 2), you can call `GC.disable()` and `GC.enable()` to switch 
the GC on/off temporarily. You can still allocate memory, but the 
GC will not run.


For 3): Do it when you can afford the latency, for example 
between frames, or when you are not in a performance critical 
section of your program. Right now, it is not anytime capable, 
which means you cannot give it a deadline by which it either 
has to finish or abort the current operation. This would be an 
interesting enhancement.


As for manual memory management, Andrei is currently working on 
an allocator library: 
http://erdani.com/d/phobos-prerelease/std_allocator.html
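
To tie 1)-3) together, a small sketch using only documented core.memory calls (the frame loop and numbers are made up):

import core.memory : GC;

void renderFrame() @nogc
{
    // 1) @nogc: this code path cannot allocate on the GC heap,
    //    so it can never trigger a collection by itself.
}

void gameLoop()
{
    foreach (frame; 0 .. 10_000)
    {
        GC.disable();          // 2) no implicit collections during the frame
        renderFrame();
        GC.enable();

        if (frame % 600 == 0)
            GC.collect();      // 3) collect only when the pause is affordable
    }
}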


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Daniel Murphy via Digitalmars-d


"Iain Buclaw via Digitalmars-d"  wrote in 
message news:mailman.722.1410407468.5783.digitalmar...@puremagic.com...


For GDC (and distributions that ship GDC), that would extend to 3 or 4 
versions,

as gcc releases are a round about, or just over yearly.


I can live with yearly, although I wouldn't mind if GDC released more often. 



Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 13:00:37 +
Mathias LANG via Digitalmars-d  wrote:

> So you are asking someone to do something you have no need of,
> just because you think it's a good idea ?
no, i don't want to work for trashcan. i have enough things to do for
fun. and it's clearly written: "The DDMD project is nearly complete,
and this is not going to happen as a part of it."

so thank you, find another trashcan-filler.

> that's not the community problem
i got it. "we need contributors, but that's not the community problem!"
where's my money then? no money, and "not the community problem"? ok,
working on D is not fun anymore.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Kagamin via Digitalmars-d
You can also help with allocators design: 
http://forum.dlang.org/thread/lji5db$30j3$1...@digitalmars.com


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Kagamin via Digitalmars-d
The current idea is to add @nogc attribute to functions in 
phobos, which can live without GC, so that you could use them 
from other @nogc functions.


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Mathias LANG via Digitalmars-d

On Thursday, 11 September 2014 at 10:02:39 UTC, ketmar via
Digitalmars-d wrote:

On Thu, 11 Sep 2014 09:43:19 +
Dicebot via Digitalmars-d  wrote:

If it is easy you are welcome to implement it and provide 
patches.
in no way. i'm already told that it will not happen in DMD, and i
myself have no needs in such translator. besides, i'm little busy with
gdc right now.

but it's not that hard. my estimation is 3 or 4 weeks max (or ~1.5
weeks for a dedicated full-time developer). note that the translator is
necessary for translating parts of the compiler, so it can be
targeted at specific code, just like magicport.


So you are asking someone to do something you have no need of,
just because you think it's a good idea? Daniel has already poured
countless hours into DDMD, and you seem to underestimate his
investment and the number of bugs he uncovered/fixed.

That being said, I see no issue for newcomer:
- If you want to play with D, download a compiler;
- If you really want bleeding edge, download a compiler then
build master with it;
- If you can't stand building a compiler with a downloaded
binary, that's not the community problem.

However, I haven't seen any post about how LDC and GDC will react
to this change. Will that influence the 1-frontend-late "rule" we
currently have, or require a lot of work on the glue layer ?


Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Daniel Murphy via Digitalmars-d

"Thiez"  wrote in message news:ywwfdsfqlxqcdchpt...@forum.dlang.org...
With regard to the whole self-hosting thing, perhaps it is worth copying 
the way Rust handles this:


https://github.com/rust-lang/rust/wiki/Note-compiler-snapshots

So in order to build one would usually download a binary compiler and use 
it to bootstrap, but in theory one can take the last version of the 
compiler before it became self-hosting, and build all the way to the 
current version (but it would take a long time).


And that's how it will work with ddmd.  Binaries will be available in the 
form of releases with at least the most recent one being able to compile 
master.  Exactly how many releases we maintain compatibility for is yet to 
be decided.


You can always pull the last C++ version from git and use that to step all 
the way up to the latest ddmd, if you feel it's important to waste lots of 
your time that way. 



Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-11 Thread via Digitalmars-d

On Tuesday, 9 September 2014 at 13:15:56 UTC, AsmMan wrote:

in : templatename‹params›
out: templatename!(params)


Why do you want to turn it into C++'s style? It will slow down the 
compile time because we need to look at the symbol table for the type


Good question. I look at my D1 code and it is visually pleasing. 
I look at D2 code and it looks like line noise in 
comparison.


This is just an experiment where I implement stuff that is easy 
to fix in the existing parser without changing too much. The 
ideal solution is to write a completely new parser with a new and 
more coherent syntax, but this is sufficient to get some ideas.


I won't know if I think it is a good or bad idea until I have 
played with it for several months or so. I want D2 features, but 
I also want a clear visual image in my editor.


I didn't find this one so bad but these symbols are hard to 
type on usual keyboard...


Yeah, but sometimes compact syntax is more important. It is worth 
experimenting with it, so I've started with symbols that are 
available on my own keyboard although I will do square(), logic 
symbols etc if it turns out to be a nice feature.


Again, I can't tell until I've tried it for a while.



Re: C++ interop - what to do about long and unsigned long?

2014-09-11 Thread bachmeier via Digitalmars-d
On Thursday, 11 September 2014 at 00:29:37 UTC, Andrei 
Alexandrescu wrote:

On 9/10/14, 4:16 PM, bachmeier wrote:
Not to go too far off topic, but C++ interoperability is at 0 
merit
points until you've got a GC-less standard library, finished 
shared
library support on all platforms, and tools that rival those 
available

for C++.


No. C++ interop and GC are related, but only loosely.


In terms of getting C++ developers to use D, this thread that 
just appeared is a perfect example of why C++ interop won't help 
until you can use the standard library without the GC.


http://forum.dlang.org/post/impaxeaowtuaxtgty...@forum.dlang.org


Re: C++ interop - what to do about long and unsigned long?

2014-09-11 Thread Gary Willoughby via Digitalmars-d
On Wednesday, 10 September 2014 at 20:41:45 UTC, Walter Bright 
wrote:
C++'s long and unsigned long can be accessed with c_long and 
c_ulong. Unfortunately, these are aliases and mangle to their 
underlying types.


Meaning that there is no way to interface to a C++ function 
declared as:


void foo(unsigned long);

So, what to do about this?

1. elevate c_long and c_ulong into full fledged types.

2. create full fledged types __c_long and __c_ulong, and alias 
c_long and c_ulong to them.


3. some sort of attribute?

The same issue exists for C++'s 'long double'.


How would these choices affect/handle the dynamic nature of the 
current aliases? e.g. c_long on Windows is an int but on Posix 
systems it can be int or long depending on the CPU architecture.
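
To make the mangling problem concrete, a minimal sketch of the situation described above (assuming a 64-bit Posix target, where c_ulong currently aliases ulong):

import core.stdc.config : c_ulong;

// The C++ side declares:      void foo(unsigned long);
// The D binding one can write today:
extern (C++) void foo(c_ulong);

// Because c_ulong is only an alias, that declaration mangles exactly like
extern (C++) void bar(ulong);
// i.e. as the underlying type, so no D declaration produces the mangling of
// C++'s distinct 'unsigned long' (and on Windows c_ulong is uint instead,
// which has the same mismatch in the other direction).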


Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d
Hello everyone! Being a C/C++ programmer I don't understand why 
a language such as D (a systems programming language) implemented 
the garbage collector as a core feature rather than as an optional 
module or library. I and many other C/C++ programmers prefer to 
control things manually and use flexible allocation schemes that 
are suitable for the concrete situation. When every nanosecond 
matters, this is the only option, at least nowadays.


So this particular thing stops many of us from using D. When you 
can give up some performance, you usually choose Java (Scala) or 
C# because of their rich support and libraries. The opposite case 
is when you need high performance; then there is almost nothing 
to choose from, and C/C++11 does not look so bad.


So I am thinking about the idea of completely extracting the GC 
from D. To achieve this, I suppose, I would have to dig deeply 
into the D compiler and also correct/re-implement many things in 
the modules, so this would end up as almost a new version of D.


I would like to hear your suggestions and some basic 
instructions. Maybe this is not a good idea at all or it will be 
very hard to realize.


Thank you for your attention!

