Re: 2D game engine written in D is in progress

2014-12-20 Thread Joakim via Digitalmars-d-announce
On Friday, 19 December 2014 at 17:21:43 UTC, ketmar via Digitalmars-d-announce wrote:

it is still unusable. i don't care what problems samsung or other oems have, as i still got the closed proprietary system.


Not exactly, as the flourishing Android ROM scene shows.  While 
many people also jailbreak their Apple iDevices, it's not quite 
so easy to install your own ROM on them.  That comes from much of 
the source being open for Android, though certainly not all of it.



what google really has with their open-sourceness is a bunch of people that work as additional coders and testers for free. and a lot of hype like "hey, android is open! it's cool! use android!" bullshit.


What's wrong with reusing open-source work that has already been 
done in other contexts, through all the open source projects that 
are integrated into Android?  Those who worked for free did so 
because they wanted to, either because they got paid to do so at 
Red Hat or IBM and released their work for free or because they 
enjoyed doing it.  Nothing wrong with Android building on 
existing OSS.


As for the hype, the source google releases, AOSP, is completely 
open.  You're right that it's then closed up by all the hardware 
vendors, but I doubt you'll find one who hypes that it's open 
source.  So you seem to be conflating the two.


On Friday, 19 December 2014 at 18:50:14 UTC, ketmar via 
Digitalmars-d-announce wrote:

On Fri, 19 Dec 2014 18:23:59 +
Kagamin via Digitalmars-d-announce

Well, those people want to do that, so why not?


i have nothing against that, everyone is free to do what he wants. what i'm against is declaring android an "open project". it's a proprietary project with partially opened source.


I'd say an open source project with proprietary additions. :) But AOSP is not particularly open in how it's developed, as google pretty much works on it on their own and then puts out OSS code dumps a couple of times a year.  That's not a true open source process, where you do everything in the open and continuously take outside patches, as D does, but they do pull in patches from the several outside OSS projects they build on.


In any case, AOSP releases all their source under OSS licenses; not sure what more you want.


Re: 2D game engine written in D is in progress

2014-12-20 Thread ketmar via Digitalmars-d-announce
On Sat, 20 Dec 2014 10:58:58 +
Joakim via Digitalmars-d-announce
digitalmars-d-announce@puremagic.com wrote:

 Nothing wrong with Android building on existing OSS.
i never said that this is something wrong. unethical from my POV, but
not wrong.

 As for the hype, the source google releases, AOSP, is completely 
 open.  You're right that it's then closed up by all the hardware 
 vendors, but I doubt you'll find one who hypes that it's open 
 source.  So you seem to be conflating the two.
i see such people almost every day. "i bought an android-based smartphone 'cause android is open source!" i still can't understand how buying closed proprietary crap supports FOSS. and android is still a proprietary system with opened source, not FOSS.

Linux, by the way, is not a real FOSS for me. not until it will adopt
GPLv3, which will never happen.




Re: Concise Binary Object Representation (CBOR) binary serialization library.

2014-12-20 Thread MrSmith via Digitalmars-d-announce

On Friday, 19 December 2014 at 22:25:57 UTC, Nordlöw wrote:

On Friday, 19 December 2014 at 18:26:26 UTC, MrSmith wrote:

Here is github link: https://github.com/MrSmith33/cbor-d
Destroy!


It would be nice to have a side-by-side comparison with 
http://msgpack.org/ which is in current use by a couple of 
existing D projects, including D Completion Daemon (DCD) and a 
few of mine.


There is a comparison to msgpack here (and to other formats too): 
http://tools.ietf.org/html/rfc7049#appendix-E.2

which states:

   [MessagePack] is a concise, widely implemented counted binary serialization format, similar in many properties to CBOR, although somewhat less regular. While the data model can be used to represent JSON data, MessagePack has also been used in many remote procedure call (RPC) applications and for long-term storage of data.

   MessagePack has been essentially stable since it was first published around 2011; it has not yet had a transition. The evolution of MessagePack is impeded by an imperative to maintain complete backwards compatibility with existing stored data, while only few bytecodes are still available for extension.

   Repeated requests over the years from the MessagePack user community to separate out binary and text strings in the encoding recently have led to an extension proposal that would leave MessagePack's raw data ambiguous between its usages for binary and text data. The extension mechanism for MessagePack remains unclear.


Re: Concise Binary Object Representation (CBOR) binary serialization library.

2014-12-20 Thread MrSmith via Digitalmars-d-announce

On Friday, 19 December 2014 at 22:33:57 UTC, BBaz wrote:

Do you know OGDL ?

http://ogdl.org/

It's currently the most 'appealing' thing to me for 
serialization.


That is interesting! Is there a D implementation?
Though, it looks like it doesn't support many data types.


Re: Concise Binary Object Representation (CBOR) binary serialization library.

2014-12-20 Thread MrSmith via Digitalmars-d-announce

On Friday, 19 December 2014 at 22:46:14 UTC, ponce wrote:

On Friday, 19 December 2014 at 22:33:57 UTC, BBaz wrote:

On Friday, 19 December 2014 at 18:26:26 UTC, MrSmith wrote:
The Concise Binary Object Representation (CBOR) is a data format whose design goals include the possibility of extremely small code size, fairly small message size, and extensibility without the need for version negotiation.  These design goals make it different from earlier binary serializations such as ASN.1 and MessagePack.



When implementing CBOR serialization/parsing I got the 
impression that it was remarkably similar to MessagePack, except 
later. Did you spot anything different?


Not much in the sense of implementation, but it has a text type, 
indefinite-length encoding, and tags, and can be easily extended 
if needed.  I think of it as a better msgpack.
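The text-vs-binary distinction mentioned above is visible right in CBOR's head byte. Below is a from-scratch sketch in D of RFC 7049's head-byte scheme (this is the wire format itself, not the cbor-d API):

```d
import std.stdio;

// Each CBOR data item starts with one head byte:
// top 3 bits = major type, low 5 bits = "additional information".
enum MajorType : ubyte
{
    uinteger   = 0, // unsigned integer
    byteString = 2, // raw bytes
    textString = 3, // UTF-8 text: the type MessagePack originally lacked
}

// Values below 24 fit directly in the head byte's low 5 bits.
ubyte head(MajorType mt, ubyte info)
{
    assert(info < 24, "larger values need extra length bytes");
    return cast(ubyte)((mt << 5) | info);
}

void main()
{
    // Unsigned integer 10 encodes as the single byte 0x0a.
    assert(head(MajorType.uinteger, 10) == 0x0a);
    // A 3-byte byte string has head 0x43; a 3-char text string has 0x63,
    // so binary and text data are never ambiguous on the wire.
    assert(head(MajorType.byteString, 3) == 0x43);
    assert(head(MajorType.textString, 3) == 0x63);
    writeln("head bytes match RFC 7049");
}
```

The remaining 5-bit values (24-31) signal longer length encodings and the indefinite-length variant, which is how the format stays extensible without version negotiation.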


Re: 2D game engine written in D is in progress

2014-12-20 Thread Joakim via Digitalmars-d-announce
On Saturday, 20 December 2014 at 11:57:49 UTC, ketmar via 
Digitalmars-d-announce wrote:

i still can't understand how buying closed proprietary crap supports FOSS. and android is still a proprietary system with opened source, not FOSS.


I'll tell you how.  First off, all the external OSS projects that 
AOSP builds on, whether the linux kernel or gpsd or gcc, get much 
more usage and patches because they're being commercially used.  
Android has had their linux kernel patches merged back upstream 
into the mainline linux kernel.


Once companies saw Android taking off, they started a non-profit 
called Linaro to develop the linux/ARM OSS stack, mostly for 
Android but also for regular desktop distros, and share resources 
with each other, employing several dozen paid developers who only 
put out OSS work, which benefits everyone, ie both OSS projects 
and commercial vendors:


http://en.wikipedia.org/wiki/Linaro

If they hadn't had success with Android commercially, there's no 
way they do that.  I keep making this point to you: pure OSS has 
never done well and never will; it can only succeed in a mixed 
fashion.


Linux, by the way, is not a real FOSS for me. not until it adopts GPLv3, which will never happen.


What will never happen is the GPLv3 ever taking off.


Re: 2D game engine written in D is in progress

2014-12-20 Thread Dicebot via Digitalmars-d-announce

On Saturday, 20 December 2014 at 15:02:59 UTC, Joakim wrote:

Linux, by the way, is not a real FOSS for me. not until it adopts GPLv3, which will never happen.


What will never happen is the GPLv3 ever taking off.


GPLv3 is the single worst thing that ever happened to OSS


Re: 2D game engine written in D is in progress

2014-12-20 Thread ketmar via Digitalmars-d-announce
On Sat, 20 Dec 2014 15:02:57 +
Joakim via Digitalmars-d-announce
digitalmars-d-announce@puremagic.com wrote:

 I'll tell you how.  First off, all the external OSS projects that 
 AOSP builds on, whether the linux kernel or gpsd or gcc, get much 
 more usage and patches because they're being commercially used.
can i see some statistics? i hear that argument ("it got more patches")
almost every time, but nobody can give any proof. i can't see how the
x86 code generator got better due to android, for example. ah, didn't i
tell you that i don't care about arm at all? somehow people telling me
about how android boosts something are sure that i do or should care
about that something. so i feel that i can do the same and argue that
i don't care.

 Android has had their linux kernel patches merged back upstream 
 into the mainline linux kernel.
those patches are of no use for me. why should i be excited?

 Once companies saw Android taking off, they started a non-profit 
 called Linaro to develop the linux/ARM OSS stack, mostly for 
 Android but also for regular desktop distros, and share resources 
 with each other, employing several dozen paid developers who only 
 put out OSS work, which benefits everyone, ie both OSS projects 
 and commercial vendors:
you did understand what i want to say, did you? ;-)

 I keep making this point to you, that pure OSS 
 has never and will never do well, that it can only succeed in a 
 mixed fashion.
why should i care if OSS will "do well"? i don't even know what that
means. it is *already* well for me and suits my needs. making another
proprietary crap do well changes nothing. more than that, it makes
people forget what the F in FOSS is. so i'm not interested in the
success of OSS projects.

  Linux, by the way, is not a real FOSS for me. not until it will 
  adopt
  GPLv3, which will never happen.
 
 What will never happen is the GPLv3 ever taking off.
yes, corporate business will fight for its right to do tivoisation
and to hide the code till the end. that's why i'm not trying hard to
help non-GPLv3 projects, only occasional patches here and there if a
given issue is annoying me.




Re: Concise Binary Object Representation (CBOR) binary serialization library.

2014-12-20 Thread Paolo Invernizzi via Digitalmars-d-announce

On Saturday, 20 December 2014 at 14:11:56 UTC, MrSmith wrote:

On Friday, 19 December 2014 at 22:25:57 UTC, Nordlöw wrote:

On Friday, 19 December 2014 at 18:26:26 UTC, MrSmith wrote:

Here is github link: https://github.com/MrSmith33/cbor-d
Destroy!


It would be nice to have a side-by-side comparison with 
http://msgpack.org/ which is in current use by a couple 
existing D projects, include D Completion Daemon (DCD) and a 
few of mine.


There is a comparison to msgpack here (and to other formats 
too): http://tools.ietf.org/html/rfc7049#appendix-E.2

which states:


I suggest also looking at Cap'n Proto; its author was the author 
of the original google protobuf, and here [1] you can find some 
interesting insight about serialization protocols.

I'm planning an implementation of Cap'n Proto for D...

Good job, anyway! ;-P

[1] http://kentonv.github.io/capnproto/news/
---
Paolo


Re: 2D game engine written in D is in progress

2014-12-20 Thread Joakim via Digitalmars-d-announce
On Saturday, 20 December 2014 at 15:48:59 UTC, ketmar via 
Digitalmars-d-announce wrote:

On Sat, 20 Dec 2014 15:02:57 +
Joakim via Digitalmars-d-announce
digitalmars-d-announce@puremagic.com wrote:

I'll tell you how.  First off, all the external OSS projects 
that AOSP builds on, whether the linux kernel or gpsd or gcc, 
get much more usage and patches because they're being 
commercially used.
can i see some statistics? i hear that argument ("it got more patches") almost every time, but nobody can give any proof. i can't see how the x86 code generator got better due to android, for example.


Why would we collect stats: what difference does it make if an 
OSS project is 10% commercially developed or 20%?  There are 
patches being sent upstream that would not be sent otherwise, 
that's all that matters.  As for the x86 code generator, Android 
has been available on x86 for years now: it's possible there were 
some patches sent back for that.



ah, didn't i tell you that i don't care about arm at all? somehow people telling me about how android boosts something are sure that i do or should care about that something. so i feel that i can do the same and argue that i don't care.

Android has had their linux kernel patches merged back 
upstream into the mainline linux kernel.

those patches are of no use for me. why should i be excited?

Once companies saw Android taking off, they started a 
non-profit called Linaro to develop the linux/ARM OSS stack, 
mostly for Android but also for regular desktop distros, and 
share resources with each other, employing several dozen paid 
developers who only put out OSS work, which benefits everyone, 
ie both OSS projects and commercial vendors:

you did understand what i want to say, did you? ;-)

I keep making this point to you, that pure OSS has never and 
will never do well, that it can only succeed in a mixed 
fashion.

why should i care if OSS will "do well"? i don't even know what that means. it is *already* well for me and suits my needs. making another proprietary crap do well changes nothing. more than that, it makes people forget what the F in FOSS is. so i'm not interested in the success of OSS projects.


You may not care about any of these patches for your own use, 
because you don't use ARM or whatever, but you certainly seem to 
care about FOSS doing well.  Well, the only reason FOSS suits 
your needs and has any usage today is precisely because 
commercial vendors contributed greatly to its development, 
whether IBM and Red Hat's contributions stemming from their 
consulting/support model or the Android vendors' support paid for 
by their mixed model.


You may resent the fact that it means some non-OSS software still 
exists out there and is doing well, but FOSS would be dead 
without it.  If that were the case, there would be almost no "F": 
just try doing anything with Windows Mobile or Blackberry OS.  
Your "F" may be less than in a hypothetical pure-FOSS world, but 
that world will never exist.


 Linux, by the way, is not a real FOSS for me. not until it adopts GPLv3, which will never happen.

 What will never happen is the GPLv3 ever taking off.

yes, corporate business will fight for its right to do tivoisation and to hide the code till the end. that's why i'm not trying hard to help non-GPLv3 projects, only occasional patches here and there if a given issue is annoying me.


What you should worry about more is that not only has the GPLv3 
not taken off, but the GPLv2 is also in retreat, with more and 
more projects choosing permissive licenses these days.  The viral 
licensing approach of the GPLv2/v3 is increasingly dying off.


Re: Travis-CI support for D

2014-12-20 Thread Martin Nowak via Digitalmars-d-announce

On 12/15/2014 12:03 AM, Ellery Newcomer wrote:


trying it out with pyd, and I'm getting

ImportError: libphobos2.so.0.66: cannot open shared object file: No such
file or directory

are shared libraries supported?


Yes, shared libraries should work on linux.
Check that you're respecting LD_LIBRARY_PATH.
https://github.com/travis-ci/travis-build/pull/340/files#diff-ac986a81b67f1bd5851c535881c18abeR65
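In practice that usually means putting the directory containing libphobos2.so on the loader path before the tests run. A minimal sketch; the install path below is hypothetical and depends on where the Travis D installer actually puts the compiler:

```shell
# Hypothetical location of the dmd shared libraries on the build VM.
DMD_LIB="$HOME/dlang/dmd2/linux/lib64"

# Prepend it so the dynamic loader can resolve libphobos2.so.0.66 at run time.
export LD_LIBRARY_PATH="$DMD_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Sanity check: this should now list libphobos2 as found rather than missing.
# ldd ./your_test_binary | grep libphobos2
```

The `${VAR:+:$VAR}` expansion avoids a dangling colon when LD_LIBRARY_PATH was previously unset.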


Re: 2D game engine written in D is in progress

2014-12-20 Thread ketmar via Digitalmars-d-announce
On Sat, 20 Dec 2014 17:12:46 +
Joakim via Digitalmars-d-announce
digitalmars-d-announce@puremagic.com wrote:

 On Saturday, 20 December 2014 at 15:48:59 UTC, ketmar via 
 Digitalmars-d-announce wrote:
  On Sat, 20 Dec 2014 15:02:57 +
  Joakim via Digitalmars-d-announce
  digitalmars-d-announce@puremagic.com wrote:
 
  I'll tell you how.  First off, all the external OSS projects 
  that AOSP builds on, whether the linux kernel or gpsd or gcc, 
  get much more usage and patches because they're being 
  commercially used.
  can i see some statistics? i hear that argument (it got more 
  patches)
  almost every time, but nobody can give any proofs. i can't see 
  how x86
  code generator got better due to android, for example.
 
 Why would we collect stats: what difference does it make if an 
 OSS project is 10% commercially developed or 20%?
'cause i want to know what "much more" means. 1? 10? 100? 1000? 1?
sure, 1 is much more than zero, as 1 is not nothing. but how much?

 There are 
 patches being sent upstream that would not be sent otherwise, 
 that's all that matters.
nope. when i see "much more", i want to know how much that much is.

 As for the x86 code generator, Android 
 has been available on x86 for years now: it's possible there were 
 some patches sent back for that.
and it's possible that i sent even more patches. so what? why does
nobody praise me for that? ah, i'm not a That Big Company that throws
off their leavings.

 You may not care about any of these patches for your own use, 
 because you don't use ARM or whatever, but you certainly seem to 
 care about FOSS doing well.
i still can't understand what "doing well" means. what i see is that
with corporations comes a rise of permissive licenses, and i can't
see that as a good thing.

  Well, the only reason FOSS suits 
 your needs and has any usage today is precisely because 
 commercial vendors contributed greatly to its development
i don't think so. OpenBSD suits too. it just happens that i didn't
have access to *BSD at the time, so i took Linux. yet i'm seriously
thinking about dropping Linux, as with all that commercial support
it suits me less and less.

 You may resent the fact that it means some non-OSS software still 
 exists out there and is doing well, but FOSS would be dead 
 without it.  If that were the case, there would be almost no F, 
 just try doing anything with Windows Mobile or Blackberry OS.  
 Your F may be less than a hypothetical pure FOSS world, but 
 that world will never exist.
this world still does not exist. and dropping the F will not help it.

 What you should worry about more is that not only has the GPLv3 
 not taken off, but the GPLv2 is also in retreat, with more and 
 more projects choosing permissive licenses these days.  The viral 
 licensing approach of the GPLv2/v3 is increasingly dying off.
that's why i'm against the OSS bs. the success of Linux is tied to its
viral license. just look at FreeBSD: it started earlier, it had a lot
more to offer when Linux was just a child, yet its permissive
license led to companies taking FreeBSD and doing closed forks
(Juniper, for example).




Re: Blog: making sure your D projects won't break

2014-12-20 Thread Joseph Rushton Wakeling via Digitalmars-d-announce

On Monday, 15 December 2014 at 05:51:56 UTC, Dicebot wrote:
Ironically, not a single one of the few projects I have tried 
adding currently compiles with dmd git master - will add more as 
issues get resolved.


Well, that nudged me to get some fixes done, at least :-)

I'd like to reiterate my thanks for what I think will be a really 
key bit of work in providing rigorous quality assurance for D.  
It's something that I've wanted to see for a long time:

http://forum.dlang.org/thread/mailman.47.1369319426.13711.digitalmar...@puremagic.com

... but didn't have the time or expertise to address, so I'm 
really grateful you have stepped up to deliver this.


I'd really encourage other people with dub-compliant projects to 
sign up; let's leverage these testing opportunities to ensure 
both that our projects don't suffer bitrot, and that we have a 
good advance warning of any unintended (or intended!) breaking 
changes in the compiler and/or libraries.


In the longer run, it'd be great if this could become an official 
part of the D testing framework accessible from dlang.org.  It 
would also be really nice if we could have some sort of link 
between this system, and the project list on code.dlang.org: 
perhaps a traffic-light system where a project is marked green if 
it's compatible with current D release, amber if it works but 
triggers warnings, and red if it fails?


Whatever the future allows, thanks once again for being awesome 
:-)


Re: 2D game engine written in D is in progress

2014-12-20 Thread Joakim via Digitalmars-d-announce
On Saturday, 20 December 2014 at 18:49:06 UTC, ketmar via 
Digitalmars-d-announce wrote:

On Sat, 20 Dec 2014 17:12:46 +
Joakim via Digitalmars-d-announce
digitalmars-d-announce@puremagic.com wrote:

 Why would we collect stats: what difference does it make if an 
 OSS project is 10% commercially developed or 20%?

'cause i want to know what "much more" means. 1? 10? 100? 1000? 1? sure, 1 is much more than zero, as 1 is not nothing. but how much?


There are patches being sent upstream that would not be sent 
otherwise, that's all that matters.

nope. when i see "much more", i want to know how much that much is.


That still doesn't answer the question of why anyone would spend 
time collecting stats when it's pointless to quantify anyway.  If 
it's 20%, is it all of a sudden worth it for you?  10%?  30%?


You may not care about any of these patches for your own use, 
because you don't use ARM or whatever, but you certainly seem 
to care about FOSS doing well.

i still can't understand what "doing well" means. what i see is that with corporations comes a rise of permissive licenses, and i can't see that as a good thing.


I've explained in detail what "doing well" means: these hobbyist 
OSS projects, whether the linux kernel or gcc or whatever you 
prefer, would be unusable for any real work without significant 
commercial involvement over the years.  Not sure what's difficult 
to understand about that.


It's not just corporations using permissive licenses.  Many more 
individuals choose a permissive license for their personal 
projects these days, as opposed to emulating linux and choosing 
the GPL by default like they did in the past.


 Well, the only reason FOSS suits your needs and has any 
usage today is precisely because commercial vendors 
contributed greatly to its development

i don't think so. OpenBSD suits too. it just happens that i didn't have access to *BSD at the time, so i took Linux. yet i'm seriously thinking about dropping Linux, as with all that commercial support it suits me less and less.


You think OpenBSD did not also benefit from commercial help?

What you should worry about more is that not only has the 
GPLv3 not taken off, but the GPLv2 is also in retreat, with 
more and more projects choosing permissive licenses these 
days.  The viral licensing approach of the GPLv2/v3 is 
increasingly dying off.

that's why i'm against the OSS bs. the success of Linux is tied to its viral license. just look at FreeBSD: it started earlier, it had a lot more to offer when Linux was just a child, yet its permissive license led to companies taking FreeBSD and doing closed forks (juniper, for example).


The viral GPL may have helped linux initially, when it was mostly 
consulting/support companies like IBM and Red Hat using open 
source, so the viral aspect of forcing them to release source 
pushed linux ahead of BSD.  But now that companies are more used 
to open source and are actually releasing products based on it, 
like Android or Juniper's OS or llvm, they're releasing source 
under permissive licenses too.  And products make a lot more 
money than consulting/support: Samsung and Apple make a ton more 
money off Android/iOS than Red Hat makes off OS support 
contracts.


So the writing is on the wall: by hitching themselves to a better 
commercial model, permissive licenses and mixed models are slowly 
killing off the GPL.  I wrote about some of this and suggested a 
new mixed model almost five years ago:


http://www.phoronix.com/scan.php?page=article&item=sprewell_licensing

What I predicted has basically come true with Android's enormous 
success using their mixed model, though I think my time-limited 
mixed model is ultimately the endgame.


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Jonathan Marler via Digitalmars-d
On Thursday, 18 December 2014 at 23:06:12 UTC, ketmar via 
Digitalmars-d wrote:

On Thu, 18 Dec 2014 22:46:04 +
Jonathan Marler via Digitalmars-d digitalmars-d@puremagic.com 
wrote:


What are people's thoughts on having an inferred type for 
cast?  Good/bad idea?  If good, how helpful would this be? 
Would this break existing code somehow?  I think this feature 
would be a nice added convenience.  Not super helpful, but nice 
to have.


Here's the details
---
Typed Cast: cast(T)v
   try to cast v to T
Type Inferred Cast: cast(auto)v
   try to cast v to whatever type is required in the current context


void foo(string s)
{
    // ...
}
void main()
{
    const(char)[] s = "hello";
    foo(cast(string)s); // Current
    foo(cast(auto) s);  // The type of the cast is inferred to be string
    foo(cast() s);      // Another possible syntax
}

This would help refactorability.  If a function argument 
changes its type, and the caller is using a cast, then the 
caller's cast type will be updated automatically.  Note that 
if it changes to an invalid type, the cast will still fail 
like normal.


using casts is a bad practice. and auto casts are... i can't even find a word. the only thing this will help is to hide bugs, i believe.


please, don't.

nothing personal, i'm just terrified by the idea.


Performing a grep on phobos reveals there are currently almost 
3,000 casts.  I never like to use casts, but they are a necessary 
evil.  I think anything D can do to help the programmer get their 
job done is a win.  The initial pro I saw for this idea was 
improving refactorability.  I see this as a huge win.  You claim 
it will hide bugs?  Could you give an example?


When I requested other people to chime in on this idea I was more 
looking for data/examples.  Maybe someone will think of an 
example that shows this idea could encourage bad coding 
practices?  Maybe it will hide bugs?  My point is, I don't want 
to discourage you from providing your opinion, but I would 
really like to understand where your opinion comes from.  I hope 
that wasn't harsh; I've read other posts you've made on the 
forums, and although I don't agree with everything you say, I 
would value your assessment.  Thanks.


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Jonathan Marler via Digitalmars-d
On Friday, 19 December 2014 at 15:17:04 UTC, Steven Schveighoffer 
wrote:

On 12/18/14 6:18 PM, Adam D. Ruppe wrote:
On Thursday, 18 December 2014 at 23:06:12 UTC, ketmar via 
Digitalmars-d

wrote:

the only thing this will help is to hide bugs, i believe.


On the contrary, I find explicit casts hide bugs. Suppose you 
write:

size_t a = cast(int) b;

It will compile and run. It'll mostly work. But the cast to int 
probably wasn't intended (it was probably written in 32 bit code 
and not correctly ported to 64 bit).

How often do we also write "auto a = cast(T) b;"? The difference 
would be the type is written on the left side instead of the 
right. Might make an important difference when calling functions.

I think the auto cast is a win all around.


I have to agree with ketmar. Cast needs fixing, but this is not 
it. We need more control over what is cast, not less control.


Your example unwittingly shows the issue :) Casts are blunt 
instruments that force the compiler to abandon its checks. I'm 
not as concerned about "a" changing its type as I am about "b". 
Change the type of b, and the compiler still happily generates 
possibly disastrous code.

At this point, we can only say "abandon ALL checks". We can't 
finely tune this. I think we need something more along the 
lines of C++'s casting directives.


And in answer to your above code snippet, I see no benefit for:

size_t a = cast(auto) b;

over:

auto a = cast(size_t) b;

-Steve


Nobody likes to use cast, but for now we are stuck with it.  
Creating alternatives to cast would be a great thing to discuss, 
but doesn't really apply to the point at hand, which is: would 
cast(auto) be a useful extension to our current cast operator?  I 
think it could be.  In my opinion, if we allow return value types 
to be written as auto then it makes sense to have cast(auto) as 
well.  In both cases the developer would need to look somewhere 
else to find what type auto actually gets resolved to.


The way I look at it, cast(auto) is like saying: hey compiler, I 
know this value can't be implicitly converted, so just pretend 
like I've given you an explicit cast to whatever type you need.  
Note that this cast is still under the constraints of an explicit 
cast; you can't cast a struct to an int or whatever.  I can't 
really think of a case where this would be more dangerous than a 
typed explicit cast, but I didn't think too long :)
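To make the refactoring argument concrete, here is a small sketch. The cast(auto) and cast() lines are the proposal's hypothetical syntax and do not compile in today's D, so they are left commented; the function name is made up for illustration:

```d
// Hypothetical example: how cast(auto) would track a refactored parameter.
void log(string msg) {}   // suppose this parameter later becomes const(char)[]

void main()
{
    char[] buf = "hello".dup;

    // Today: the target type is spelled out, so changing log's parameter
    // type means hand-editing the cast at every call site.
    log(cast(string) buf);

    // Proposed: the compiler would infer the cast's target type (string
    // here) from the parameter the argument is matched against.
    //log(cast(auto) buf);
    //log(cast() buf);     // alternative syntax floated in the thread
}
```

If log's parameter later became const(char)[], the explicit cast on the call site would have to be rewritten by hand, while the inferred form would (per the proposal) adjust automatically or fail to compile if the conversion became invalid.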


Re: Lost a new commercial user this week :(

2014-12-20 Thread Dicebot via Digitalmars-d
On Saturday, 20 December 2014 at 07:26:30 UTC, Manu via 
Digitalmars-d wrote:

Thank you.

I get so frustrated by the apparent majority in this forum who seem to think 'most' programmers are the type who would even read or post on a forum like this. Or read a programming book! They must surely be the overwhelming minority.


I am well-aware that people who actually have a passion for 
programming are a tiny minority.  Though many still read 
programming books, because that adds some weight to career 
advancement requests.


But you call to support the interests of those programmers at the 
cost of the interests of existing users - something absolutely 
impractical at the current language development stage.  Trying to 
explain why this expectation is neither reasonable nor just is 
the least hostile reaction I can give.


Whatever Andrei says, there is no benefit in blindly pumping up 
the user base if you don't have the resources to support it.  I 
also remember him saying "want a million users, build as if you 
had a million users".  Well, currently we don't truly build even 
for a thousand users.


You have been stressing those cases for several years now. Have 
you actually contributed anything to DMD to improve debug symbol 
generation seeing how this is important to you?


I keep asking you simple question you avoid answering - who 
personally should work to address your concerns? Those are all 
legit concerns and I doubt anyone would willingly prefer to keep 
things as they are. But who will do it if apparently you are the 
person who needs it most and you don't want to do it?


Re: Lost a new commercial user this week :(

2014-12-20 Thread Dicebot via Digitalmars-d

On Friday, 19 December 2014 at 10:47:28 UTC, Sergei Nosov wrote:
I think the most valuable point Manu made is that there are 
"excellent" and "good" programmers.  The difference is not so 
much in the actual skills, but in the willingness to spend time 
on programming.


I would avoid calling those groups "good" and "excellent".  This 
sounds dangerously elitist with no actual data to back it up - it 
is not uncommon to find better programmers in the group you call 
"good" than some of those you call "excellent".

Calling them "not nerds" and "nerds" is a better way to get 
straight to the fundamentals :)  And yes, I believe currently any 
effective usage of D requires at least one nerd in a team to act 
as a proxy for the rest.


pathfinding benchmark

2014-12-20 Thread Xiaoxi via Digitalmars-d

http://www.reddit.com/r/programming/comments/2pvf68/armv7_vs_x8664_pathfinding_benchmark_of_c_d_go/

didn't analyse the code, but D did quite well. :)


Re: Inferred Type for Explicit Cast

2014-12-20 Thread bearophile via Digitalmars-d

Jonathan Marler:

if we allow return value types to be written as auto then it 
makes sense to have cast(auto) as well.


I think cast(auto), as I understand your proposal, too often 
introduces undefined situations, because in many cases the 
compiler can't know what type to cast to, and guessing is not 
acceptable. So I think it's not a good idea.


But sometimes I have code like this:

void main() {
int x;
byte y;
// ...
y = cast(typeof(y))x;
}


Here I want to cast x to the type of y to allow the assignment to 
y. This is perhaps an acceptable semantics for cast(auto):



void main() {
int x;
byte y;
// ...
y = cast(auto)x; // OK
}


In that case the inference for the casting type is easy because 
the type of y is already defined. More examples:



void foo(byte z) {}
void main() {
int x;
// ...
auto y = cast(auto)x; // Error
byte y = cast(auto)x; // OK, but not very useful
foo(cast(auto)x); // OK
}

Bye,
bearophile


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Dicebot via Digitalmars-d
On Friday, 19 December 2014 at 15:17:04 UTC, Steven Schveighoffer 
wrote:

On 12/18/14 6:18 PM, Adam D. Ruppe wrote:
On Thursday, 18 December 2014 at 23:06:12 UTC, ketmar via 
Digitalmars-d

wrote:

the only thing this will help is to hide bugs, i believe.


On the contrary, I find explicit casts hide bugs. Suppose you 
write:


size_t a = cast(int) b;

It will compile and run. It'll mostly work. But the cast to 
int probably

wasn't intended (it was probably written in 32 bit code and not
correctly ported to 64 bit).

How often do we also write auto a = cast(T) b;? The difference 
would be that the type is written on the left side instead of the 
right. Might make an important difference when calling functions.

I think the auto cast is a win all around.


I have to agree with ketmar. Cast needs fixing, but this is not 
it. We need more control over what is cast, not less control.


Your example unwittingly shows the issue :) casts are blunt 
instruments that force the compiler to abandon its checks. I'm 
not as concerned about a changing its type as I am about b. 
Change the type of b, and the compiler still happily generates 
possibly disastrous code.


At this point, we can only say abandon ALL checks. We can't 
finely tune this.


I'd like to have a cast where you must define both from and 
to types precisely.


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Dicebot via Digitalmars-d

It can be trivially added to Phobos, of course, if more
people consider it useful.


Re: pathfinding benchmark

2014-12-20 Thread JN via Digitalmars-d

On Saturday, 20 December 2014 at 10:12:40 UTC, Xiaoxi wrote:

http://www.reddit.com/r/programming/comments/2pvf68/armv7_vs_x8664_pathfinding_benchmark_of_c_d_go/

didnt analyse the code, but D did quite well. :)


except for the fact that writeln didn't work :x


Re: pathfinding benchmark

2014-12-20 Thread bearophile via Digitalmars-d

Xiaoxi:


http://www.reddit.com/r/programming/comments/2pvf68/armv7_vs_x8664_pathfinding_benchmark_of_c_d_go/

didnt analyse the code, but D did quite well. :)


A little better D code:


import std.stdio, std.file, std.conv, std.string, std.datetime;

struct Route { uint dest, cost; }
alias Node = Route[];

Node[] readPlaces() {
auto lines = "agraph".File.byLine;

immutable numNodes = lines.front.to!uint;
lines.popFront;
auto nodes = new Node[numNodes];

foreach (const line; lines) {
immutable nums = line.split.to!(uint[]);
if (nums.length < 3)
break;
nodes[nums[0]] ~= Route(nums[1], nums[2]);
}

return nodes;
}

uint getLongestPath(in Node[] nodes, in uint nodeID, bool[] 
visited)

pure nothrow @safe @nogc {
visited[nodeID] = true;
typeof(return) dMax = 0;

foreach (immutable neighbour; nodes[nodeID])
if (!visited[neighbour.dest]) {
immutable dist = neighbour.cost +
 getLongestPath(nodes, 
neighbour.dest, visited);

if (dist > dMax)
dMax = dist;
}

visited[nodeID] = false;
return dMax;
}

void main() {
const nodes = readPlaces;
auto visited = new bool[nodes.length];

StopWatch sw;
sw.start;
immutable len = getLongestPath(nodes, 0, visited);
sw.stop;
printf("%d language D %d\n", len, sw.peek.msecs);
}




I don't remember if the D entry gets faster with ldc2 if nodes is 
immutable, this is a version that keeps it immutable as in the 
original code:



import std.stdio, std.file, std.conv, std.string, std.datetime, 
std.exception;


struct Route { uint dest, cost; }
alias Node = Route[];

Node[] readPlaces() {
auto lines = "agraph".File.byLine;

immutable numNodes = lines.front.to!uint;
lines.popFront;
auto nodes = new Node[numNodes];

foreach (const line; lines) {
immutable nums = line.split.to!(uint[]);
if (nums.length < 3)
break;
nodes[nums[0]] ~= Route(nums[1], nums[2]);
}

return nodes;
}

uint getLongestPath(immutable Node[] nodes, in uint nodeID, 
bool[] visited)

pure nothrow @safe @nogc {
visited[nodeID] = true;
typeof(return) dMax = 0;

foreach (immutable neighbour; nodes[nodeID])
if (!visited[neighbour.dest]) {
immutable dist = neighbour.cost +
 getLongestPath(nodes, 
neighbour.dest, visited);

if (dist > dMax)
dMax = dist;
}

visited[nodeID] = false;
return dMax;
}

void main() {
immutable nodes = readPlaces.assumeUnique;
auto visited = new bool[nodes.length];

StopWatch sw;
sw.start;
immutable len = getLongestPath(nodes, 0, visited);
sw.stop;
printf("%d language D %d\n", len, sw.peek.msecs);
}


But I think this is probably not necessary.

If you want you can submit the code to the original tester.

Bye,
bearophile


Re: What is the D plan's to become a used language?

2014-12-20 Thread Daniel Murphy via Digitalmars-d
Ola Fosheim Grøstad  wrote in message 
news:fjmtziqyopoyrpesz...@forum.dlang.org...


Yes, but it would be easy to define some focused goals for each release 
and refuse to touch stuff that belongs to a later release. E.g.


It would be easy to define such a list, but it would be near-impossible to 
force contributors to follow it.  Refusing to accept contributions outside 
the goals would most likely result in fewer contributions, not more focused 
contributions. 



Re: What is the D plan's to become a used language?

2014-12-20 Thread Bienlein via Digitalmars-d
I would say that D needs a use case that sets it apart from other 
languages. For Java this was the Internet. For Go it was 
channel-based concurrency in conjunction with some style of green 
threads (aka CSP). It is now the time of server-side concurrent 
programming. I would suggest jumping on this bandwagon and adding 
channels and green threads to D. When people successfully develop 
many server-side systems this way, as with Go, the news will spread 
by itself. No killer app for D needed. Go does not have one either.


-- Bienlein


Re: What is the D plan's to become a used language?

2014-12-20 Thread Paulo Pinto via Digitalmars-d

On Saturday, 20 December 2014 at 12:19:34 UTC, Bienlein wrote:
I would say that D needs a usecase that puts it aside from 
other languages. For Java this was the Internet. For Go it was 
channel-based concurrency in conjunction with some style of 
green threads (aka CSP). It is now the time of server side 
concurrent programming. I would suggest to jump onto this wagon 
and add channels and green threads to D. When people 
successfully develop many server side systems this way as with 
Go the news will spread by itself. No killer app for D needed. 
Also Go does not have one.


-- Bienlein


Go has Google's sponsorship, Docker and CoreOS.


Re: What is the D plan's to become a used language?

2014-12-20 Thread Dicebot via Digitalmars-d

On Saturday, 20 December 2014 at 12:19:34 UTC, Bienlein wrote:
I would say that D needs a usecase that puts it aside from 
other languages. For Java this was the Internet. For Go it was 
channel-based concurrency in conjunction with some style of 
green threads (aka CSP). It is now the time of server side 
concurrent programming. I would suggest to jump onto this wagon 
and add channels and green threads to D. When people 
successfully develop many server side systems this way as with 
Go the news will spread by itself. No killer app for D needed. 
Also Go does not have one.


CSP is not superior to message passing for concurrent server 
programming, and D already beats Go in this domain; it is purely 
marketing crap. Stop repeating the same statement over and over 
again with no technical data to back it up. Or just go and 
implement CSP if you want it so much - I doubt anyone would object 
to merging it if it is already implemented.


Re: pathfinding benchmark

2014-12-20 Thread David Nadlinger via Digitalmars-d

On Saturday, 20 December 2014 at 10:36:23 UTC, JN wrote:

On Saturday, 20 December 2014 at 10:12:40 UTC, Xiaoxi wrote:

http://www.reddit.com/r/programming/comments/2pvf68/armv7_vs_x8664_pathfinding_benchmark_of_c_d_go/

didnt analyse the code, but D did quite well. :)


except for the fact that writeln didn't work :x


That's probably in reference to ARM. To be honest, I'm quite
surprised that the benchmark even worked that well, given the
current state of LDC ARM support.

David


Re: Inferred Type for Explicit Cast

2014-12-20 Thread ketmar via Digitalmars-d
On Sat, 20 Dec 2014 08:18:22 +
Jonathan Marler via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

 Performing a grep on phobos reveals there are currently almost 
 3,000 casts.  I never like to use casts but they are a necessary 
 evil.  I think anything D can do to help the programmer get their 
 job done is a win.  The initial pro I saw for this idea was 
 improving refactoribility.  I see this as a huge win.  You claim 
 it will hide bugs?  Could you give an example?
autocasting to the types of function arguments is the immediate weak
point. what you're doing by this is destroying type checking. ok, we do
that from time to time, but you're proposing handy syntax for it which
will not break when the function signature changes.

there is a topic about kinds of programmers nearby, where we found that
the majority of programmers don't read books. and they tend to use the
first tool they see to get the job done, without trying to evaluate
the long-term consequences. i can assure you that you'll see
`cast(auto)` all over their code, 'cause it's simple, it doesn't
even require looking at the function signature and trying to grasp why
it accepts the given types, and ah, it's so refactorable! (they love
refactoring, 'cause they don't like to design their software, plus
refactoring is often the blessed way to do nothing really valuable
and still get paid).

besides, `cast` is a hack by itself. the good way to deal with hacks is
to make them less powerful, not more powerful. you are proposing to
make the hack more powerful. there is nothing bad in it... when the
language is designed for use by hardcore hackers. but for languages
with greater audience this is not a good way to go. hacks will
inevitably be abused. it doesn't matter how many times we write in big
letters: PLEASE, DON'T DO THAT! so hacks must be small and
fine-grained, not small and powerful.

besides, `cast(auto)` is not decipherable without analyzing the
expression (and, possibly, function signature). it is not clear what
result of `cast(auto)` will be. this is bad for hack.

why is `cast` a hack? 'cause it turns off most of the compiler's type
checking system. that's why we have the `to!XXX` thing, which is not so
disastrous. and the compiler is smart enough to see that `a & 0xff` is
good for ubyte, for example.
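to make the contrast concrete, here is a small sketch (an editorial
illustration, not from the original post) of the unchecked cast
truncating where the checked `to!` conversion throws:

```d
import std.conv : ConvOverflowException, to;
import std.exception : assertThrown;

void main() {
    int big = 300;
    // cast silently truncates: 300 & 0xff == 44
    assert(cast(ubyte) big == 44);
    // to! checks the range at runtime and throws instead of truncating
    assertThrown!ConvOverflowException(big.to!ubyte);
    // a value that fits converts cleanly
    assert(200.to!ubyte == 200);
}
```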

we can't make `cast` less powerful now, but what we surely shouldn't do
is make it more powerful.

 When I requested other people to chime in on this idea I was more 
 looking for data/examples.  Maybe someone will think of an 
 example that shows this idea could encourage bad coding 
 practices?  Maybe it will hide bugs?  My point is, I don't want 
 to discourage you from providing your opinion, but I  would 
 really like to understand where your opinion comes from. I hope 
 that wasn't harsh, I've read other posts you've made on the 
 forums and although I don't agree with everything you say I would 
 value your assessment. Thanks.
it comes from my past experience. sometimes i just *sense* the smell.
powerful hacks are great troublemakers. making Cassandra prophecies is
my superpower. ;-)


signature.asc
Description: PGP signature


Re: What is the D plan's to become a used language?

2014-12-20 Thread Bienlein via Digitalmars-d

On Saturday, 20 December 2014 at 12:24:29 UTC, Paulo Pinto wrote:

On Saturday, 20 December 2014 at 12:19:34 UTC, Bienlein wrote:
I would say that D needs a usecase that puts it aside from 
other languages. For Java this was the Internet. For Go it was 
channel-based concurrency in conjunction with some style of 
green threads (aka CSP). It is now the time of server side 
concurrent programming. I would suggest to jump onto this 
wagon and add channels and green threads to D. When people 
successfully develop many server side systems this way as with 
Go the news will spread by itself. No killer app for D needed. 
Also Go does not have one.


-- Bienlein


Go has Google's sponsorship, Docker and CoreOS.


Message passing for concurrent server programming means 
asynchronous programming. Asynchronous programming is inherently 
difficult and error-prone. I have done it for years, and everyone 
else who has can confirm this.


The big thing with CSP-style channels is that while things are 
going on concurrently, the code can be read like synchronous code. 
This way a lot of people out there have built server-side systems 
with Go in record time. All the startups using Go are proof of 
this.


There is really a lot of technical data and scientific papers 
about this. The success of Go tells its own story. Rust will also 
have a module for CSP-style concurrent programming. That comes 
for a reason.


This is the original paper, titled "Communicating Sequential 
Processes", by C. A. R. Hoare: 
http://www.usingcsp.com/cspbook.pdf. CSP is not missing 
technical data. It has a solid basis, and Go shows that it works 
well. It has drawbacks like lack of pre-emptiveness, but having 
things like the C10K problem solved out of the box matters more 
to many of the server-side systems being built.


Apparently, for D some commercial big spender has never popped 
up. Killer apps need a good piece of fortune to turn out 
successful. But adding a more adequate multi-threading/concurrency 
model to D could make it a success. And that takes few resources 
compared to other alternatives for making D more widely used.


Docker is not developed by Google. It is made by a company of its 
own that was looking for a language suitable for server-side 
concurrent programming. It could have been D if D had better 
support for this.


Re: What is the D plan's to become a used language?

2014-12-20 Thread Bienlein via Digitalmars-d

On Saturday, 20 December 2014 at 12:21:49 UTC, Dicebot wrote:

CSP is not superior to message passing for concurrent server 
programming and D already beats Go in this domain, it is purely 
marketing crap. Stop repeating same statement over and over 
again with no technical data to back it up. Or just go and 
implement CSP if you want it so much - I doubt anyone would 
object merging it if it is already implemented.


A different approach would be to use D's existing multi-threading 
model and develop a first class actor system for it like Akka for 
Scala/Java. But that would be a big effort and competing with 
Akka would be difficult anyway. So, again, an idea to think of 
might be to add CSP to D.


Re: What is the D plan's to become a used language?

2014-12-20 Thread Dicebot via Digitalmars-d
People have already suggested that you actually try vibe.d at 
least once before repeating the "CSP is necessary for easy async" 
mantra. How about actually doing so? vibe.d + std.concurrency 
gives you a pretty much standard actor model - it lacks more 
complicated schedulers and network message passing, but the 
fundamentals are all there.
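As a minimal sketch of that actor style (an editorial illustration
using std.concurrency alone, without vibe.d; spawn, send, receiveOnly
and ownerTid are the real API, the worker itself is made up):

```d
import std.concurrency;

// Worker "actor": waits for one int message, replies with its square.
void worker() {
    immutable n = receiveOnly!int();
    ownerTid.send(n * n);
}

void main() {
    auto tid = spawn(&worker); // runs worker in its own thread
    tid.send(7);               // message passing, no shared state
    assert(receiveOnly!int() == 49);
}
```

With vibe.d's scheduler the same code runs on fibers instead of
kernel threads, which is what makes it comparable to goroutines.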


Re: What is the D plan's to become a used language?

2014-12-20 Thread Bienlein via Digitalmars-d

On Saturday, 20 December 2014 at 12:50:02 UTC, Dicebot wrote:
People have already suggested you to actually try vibe.d at 
least once before repeating CSP is necessary for easy async 
mantra. How about actually doing so? vibe.d + std.concurrency 
gives you pretty much standard actor model - it lacks more 
complicated schedulers and network message passing but 
fundamentals are all there.


CSP-style programming in D needs to be drop-dead simple, as in Go, 
to take off. You need to know about channels and the go statement 
to spawn a thread, and that's it. That's why Go was that 
successful. Vibe.d might be a well-written system, but nobody 
will learn D and thereafter vibe.d. It is either laughably simple, 
as in Go, or it will not be noticed. The simplicity of channels 
and goroutines as in Go created the excitement. The same 
simplicity is needed for any other language. The whole thing could 
be implemented with vibe.d, but at the surface there must only be 
goroutines and channels and nothing more.


Re: What is the D plan's to become a used language?

2014-12-20 Thread ponce via Digitalmars-d

On Saturday, 20 December 2014 at 12:39:01 UTC, Bienlein wrote:
This way a lot of people out there have built server side 
systems with Go in record time. All the startups using Go are 
proof for this.


I would be wary of extrapolating best practices from what 
startups do.
Startups succeed when they bet on the right market or propose 
something new and needed. I suspect technological choices play 
little part here, and that's why most companies using Go are 
startups: they could use almost anything and have the same 
outcome.


Successful rewrites from hyped language X to hyped language Y as 
pictured in blogs can also be misleading: almost all rewrites are 
rewrites of problematic systems in the first place, hence tend to 
look successful, especially rewrites of young programs.


Re: Lost a new commercial user this week :(

2014-12-20 Thread Jacob Carlborg via Digitalmars-d

On 2014-12-19 20:20, Walter Bright wrote:


No. It's attributable to "I use different methods of debugging."

The dmd source code is littered with debugging aids I've written. The
classic example is having a pretty-printer for each data structure. I
don't find the typical debugger pretty-printer to be adequate at all -
they never dump the type in the way that I think about the type.


It can still be handy with a debugger. If you have written a custom 
pretty-printer you can still call that one from the debugger.


In LLDB it's possible to write custom formatters/pretty-printer for your 
own types.


--
/Jacob Carlborg


Re: What is the D plan's to become a used language?

2014-12-20 Thread Paulo Pinto via Digitalmars-d

On Saturday, 20 December 2014 at 13:56:01 UTC, ponce wrote:

On Saturday, 20 December 2014 at 12:39:01 UTC, Bienlein wrote:
This way a lot of people out there have built server side 
systems with Go in record time. All the startups using Go are 
proof for this.


I would be wary of extrapolating best practices from what 
startups do.
Startups succeed when they bet on the right market or propose 
something new and needed. I suspect technological choices play 
little part here, and that's why most companies using Go are 
startups: they could use almost anything and have the same 
outcome.


Successful rewrites from hyped language X to hyped language Y 
as pictured in blogs can also be misleading: almost all 
rewrites are rewrites of problematic systems in the first 
place, hence successful especially rewrites of young programs.



That is why I seldom buy into hype driven development.

Usually on our teams if a specific technology wasn't explicitly 
requested by the customer, whoever is bringing it in has to 
answer what is the business value to the customer.




Re: What is the D plan's to become a used language?

2014-12-20 Thread ponce via Digitalmars-d

On Saturday, 20 December 2014 at 14:06:51 UTC, Paulo Pinto wrote:


That is why I seldom buy into hype driven development.

Usually on our teams if a specific technology wasn't explicitly 
requested by the customer, whoever is bringing it in has to 
answer what is the business value to the customer.


I think D clearly has some value for the business:
- highly expressive/productive,
- developer morale, feel-smart effect for better or worse
- high reuse
- lower bug counts when compared to C++

But much like there is hidden costs, those aspects can go 
unnoticed as hidden cost savings.


Re: Lost a new commercial user this week :(

2014-12-20 Thread Jacob Carlborg via Digitalmars-d

On 2014-12-20 08:46, Manu via Digitalmars-d wrote:


Perhaps this is habit, retained from a time where the tooling was
unreliable? You probably haven't spent the majority of your career in
an environment where you could reliably rely on them, and then as a
result, never came to rely on them?


I have tried debugging DMD using Xcode. Very often when I inspect 
variables the debugger thinks they're null when they're not. Quite often 
I don't have access to all variables I thought I would have access to.


I don't know if it's something special about the DMD code base, I have 
always thought of it as quite simple, no templates or use of the C++ 
standard library. Perhaps the Visual Studio debugger would work better.


But when it does work it can be very handy.

At work I have used a debugger, although this is for Ruby, which is a 
lot more reliable and a real pleasure to use. I can navigate inside 
objects, inspect their private state, call private methods and other 
more fancy stuff. Very handy.


--
/Jacob Carlborg


Rewrite rules for ranges

2014-12-20 Thread bearophile via Digitalmars-d
When you use UFCS chains there are many coding patterns that 
probably are hard to catch for the compiler, but are easy to 
optimize very quickly:


.sort().group.canFind = binary search

.sort().front = .reduce!min

.reverse.reverse = id

.reverse.back = .front

If the filtering and mapping functions are pure nothrow:
.map.filter = .filter.map

.map(a).map(b) = .map(compose(a, b))

and so on.
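Two of these equivalences, checked directly (an editorial sketch; 
note the map/filter swap only holds if the filter predicate is 
composed with the mapping function, which is one reason the purity 
caveat matters):

```d
import std.algorithm : equal, filter, map, min, reduce, sort;

void main() {
    auto a = [3, 1, 4, 1, 5];

    // .sort().front is the minimum, i.e. .reduce!min
    auto s = a.dup;
    s.sort();
    assert(s[0] == a.reduce!min);

    // .map.filter == .filter.map, with the predicate composed
    // with the mapping function on the right-hand side:
    assert(equal(a.map!(x => x * 2).filter!(x => x > 4),
                 a.filter!(x => x * 2 > 4).map!(x => x * 2)));
}
```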

Such sub-optimal patterns can be written by programmers who don't 
care much about performance, by mistake, or can appear after inlining.


In Haskell this is so common that GHC has a feature named rewrite 
rules:


https://downloads.haskell.org/~ghc/6.12.2/docs/html/users_guide/rewrite-rules.html

https://www.haskell.org/haskellwiki/GHC/Using_rules

Such rules are simpler to do in Haskell because the code is much 
more pure compared to usual D code. But perhaps the same is 
possible and useful in D too.


Currently some rewrite rules are present inside the Phobos ranges.

Bye,
bearophile


Re: pathfinding benchmark

2014-12-20 Thread via Digitalmars-d

On Saturday, 20 December 2014 at 10:12:40 UTC, Xiaoxi wrote:

http://www.reddit.com/r/programming/comments/2pvf68/armv7_vs_x8664_pathfinding_benchmark_of_c_d_go/

didnt analyse the code, but D did quite well. :)


Look at the last results, C++ got updated.

x86-64
Language   Runtime (ms)
C++        1.74439
D          1828


Re: pathfinding benchmark

2014-12-20 Thread via Digitalmars-d

On Saturday, 20 December 2014 at 14:37:40 UTC, Théo Bueno wrote:

On Saturday, 20 December 2014 at 10:12:40 UTC, Xiaoxi wrote:

http://www.reddit.com/r/programming/comments/2pvf68/armv7_vs_x8664_pathfinding_benchmark_of_c_d_go/

didnt analyse the code, but D did quite well. :)


Look at the last results, C++ got updated.

x86-64
Language   Runtime (ms)
C++        1.74439
D          1828


Ah ah, someone forgot a multiplication somewhere, I guess :)
https://github.com/logicchains/LPATHBench/commit/47d5f676f278b8d8ba7c415ff9ef6deac5cf

Anyway this benchmark and these numbers are crap, using the
provided makefile I have totally different results on my laptop.


Re: What is the D plan's to become a used language?

2014-12-20 Thread Bienlein via Digitalmars-d

On Saturday, 20 December 2014 at 14:06:51 UTC, Paulo Pinto wrote:


That is why I seldom buy into hype driven development.


Okay, so Docker is hype? Have you seen the impact of it? Every 
Java magazine has articles about Docker. And that is not because 
Java people have an interest in it being written in Go. It is 
because of its business value.


Have a look at all the job offers for Go developers here: 
http://www.golangprojects.com. All those jobs are the result of 
some hype.


Re: Rewrite rules for ranges

2014-12-20 Thread Matthias Bentrup via Digitalmars-d

On Saturday, 20 December 2014 at 14:16:05 UTC, bearophile wrote:

If the filtering and mapping functions are pure nothrow:
.map.filter = .filter.map


I think that one doesn't work.


Re: Rewrite rules for ranges

2014-12-20 Thread weaselcat via Digitalmars-d

Seems like something that would fit in well with dscanner.


Re: pathfinding benchmark

2014-12-20 Thread MattCoder via Digitalmars-d

On Saturday, 20 December 2014 at 14:51:54 UTC, Théo Bueno wrote:

Anyway this benchmark and these numbers are crap, using the
provided makefile I have totally different results on my laptop.


According to the site:

...Feel free to submit improvements to the implementations!

:)

Matheus.


Re: Lost a new commercial user this week :(

2014-12-20 Thread Martin Nowak via Digitalmars-d

On 12/17/2014 09:45 AM, Manu via Digitalmars-d wrote:

Well... when? I've been here 6 years. When can I start to use D for my work?
Other languages seem to have a higher velocity. Are we fighting a losing battle?


Other languages do much less than D, which is a full-blown C++ 
replacement.
We've made huge progress in the past few years, look at the number of 
bugfixes and enhancements (http://dlang.org/changelog.html), and we 
introduced or finished several language features that make D even more 
powerful (e.g. UFCS, UDAs, alias this...). Still, only very few people 
actually work on the compiler, and we're also pretty bad at coordinating 
contributions.


Re: Lost a new commercial user this week :(

2014-12-20 Thread Martin Nowak via Digitalmars-d

On 12/20/2014 06:12 PM, Martin Nowak wrote:

On 12/17/2014 09:45 AM, Manu via Digitalmars-d wrote:
Other languages do much less than D which is a full-blown C++ replacement.
We've made huge progress in the past few years


Most important, we started to grow an ecosystem.
http://code.dlang.org/


Re: Rewrite rules for ranges

2014-12-20 Thread bearophile via Digitalmars-d

weaselcat:


Seems like something that would fit in well with dscanner.


Nope. It's not a bug for the programmer, it's an optimization 
pass for the compiler. And it should work after inlining.


Bye,
bearophile


What's missing to make D2 feature complete?

2014-12-20 Thread Martin Nowak via Digitalmars-d

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 314)
- finishing non-GC memory management


Re: Lost a new commercial user this week :(

2014-12-20 Thread Joakim via Digitalmars-d

On Saturday, 20 December 2014 at 17:13:04 UTC, Martin Nowak wrote:

On 12/17/2014 09:45 AM, Manu via Digitalmars-d wrote:
Well... when? I've been here 6 years. When can I start to use 
D for my work?
Other languages seem to have a higher velocity. Are we 
fighting a losing battle?


Other languages do much less than D which is a full-blown C++ 
replacement.
We've made huge progress in the past few years, look at the 
number of bugfixes and enhancements 
http://dlang.org/changelog.html and we introduced or finished 
several language features, that make D even more powerful (e.g. 
UFCS, UDA, alias this...). Still only very few people actually 
work on the compiler and we're also pretty bad in coordinating 
contributions.


One move that might work is providing help to those who want to 
get started on compiler hacking, by letting them know who the 
people knowledgeable about dmd are and providing a venue for them 
to ask questions when getting started, i.e. some sort of mentoring 
into dmd hacking.  It's a bit alarming how few people dominate dmd 
development, especially compared to phobos:


https://github.com/D-Programming-Language/dmd/graphs/contributors

The dmd project needs to do a better job of growing more 
contributors; this is one good way to do it.  The alternative is 
to just let people jump in and sink or swim on their own, which 
might be a good filter to weed out the truly committed and 
capable from the rest, but also risks losing those who are 
capable but need some initial guidance.


For example, there could be a note on the README that says you 
should contact Kenji, Walter, Dan, or Martin if you need some 
help getting started with contributing to dmd, along with contact 
info (I assume Don and Brad might not be as interested).  You may 
say that all those people are reachable now, but without explicit 
permission like that, some people get intimidated about bothering 
someone like Walter.


Another suggestion is to actually write some docs for dmd, 
perhaps starting with frontend source layout and organization, 
similar to the brief list for the backend:


https://github.com/D-Programming-Language/dmd/blob/master/src/backend/backend.txt

Of course, writing docs is always a tall order, but if it leads 
to more contributors, it can pay off in spades.


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Jonathan Marler via Digitalmars-d

On Saturday, 20 December 2014 at 10:14:10 UTC, bearophile wrote:

Jonathan Marler:

if we allow return value types to be written as auto then it 
makes since to have cast(auto) as well.


I think cast(auto) as I understand in your proposal 
introduces too often undefined situations, because in many 
cases the compiler can't know what's the type to cast to, and 
guessing is not acceptable. So I think it's not a good idea.


You gave one example of an "undefined situation" (auto y = 
cast(auto)x). However this situation is clearly an error (not 
undefined). I think what you meant was that the type of auto is 
undefined, not the situation itself.  In this you are correct.  I 
fail to see why that makes this a bad idea. The whole concept of 
'auto' is not defined in many situations, but that doesn't make it 
a bad idea. If you were a programmer and you declared a variable 
y using auto, why would you think you need to cast it?


int x;
auto y = x;  // Normal
auto y = cast(int)x; // huh? there's no reason to do this

The only reason I could think of is when you want to explicitly 
say what type y should be.  But why would you do that using a 
cast? Just declare y as the type you want to be and skip 'auto'.


byte y = cast(byte)x;

Ah, but now you have to keep the type of y and the cast in sync. 
 So you could write this:


auto y = cast(byte)x;

It looks a little weird but it works.  I however think that if 
you want y to be a specific type, you should declare it as such 
(instead of using auto).


byte y = cast(byte)x;

We're back to the previous example. However now we are faced with 
the same problem of keeping the 2 types in sync. You could use 
cast(typeof(y)) to solve this, or you could use cast(auto).


byte y = cast(typeof(y))x;
byte y = cast(auto)x;

cast(auto) looks nicer but both work fine.  The problem with 
cast(typeof(y)) is that it doesn't work when you are assigning x 
to an unnamed variable (like a function argument).
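A sketch of that function-argument case (an editorial illustration; 
cast(auto) is the proposed syntax, not valid D today):

```d
void foo(byte z) {}

void main() {
    int x = 5;
    // today the call site must repeat foo's parameter type:
    foo(cast(byte) x);
    // there is no named variable here, so cast(typeof(...)) has
    // nothing to refer to; the proposal would infer byte from
    // foo's signature:
    // foo(cast(auto) x);  // hypothetical
}
```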




But sometimes I have code like this:

void main() {
int x;
byte y;
// ...
y = cast(typeof(y))x;
}


Here I want to cast x to the type of y to allow the assignment 
to y. This is perhaps an acceptable semantics for cast(auto):



void main() {
int x;
byte y;
// ...
y = cast(auto)x; // OK
}


In that case the inference for the casting type is easy because 
the type of y is already defined. More examples:



void foo(byte z) {}
void main() {
int x;
// ...
auto y = cast(auto)x; // Error
byte y = cast(auto)x; // OK, but not very useful
foo(cast(auto)x); // OK
}

Bye,
bearophile


Lastly I would like to say that cast(auto) provides a bit of 
functionality that is currently nowhere in the language.  It's 
not the same as cast(typeof(var)), since that cast needs a named 
variable to refer to. It would provide some functionality for 
templates that is currently nowhere in the language. I haven't 
thought of an example yet, but if you think of one let me know :)


Re: Lost a new commercial user this week :(

2014-12-20 Thread Daniel Davidson via Digitalmars-d

On Friday, 19 December 2014 at 19:20:15 UTC, Walter Bright wrote:

On 12/19/2014 7:38 AM, Daniel Davidson wrote:

Could this lack of need be
attributable to understanding of the entire code base being 
used?


No. It's attributable to "I use different methods of debugging."

The dmd source code is littered with debugging aids I've 
written. The classic example is having a pretty-printer for 
each data structure. I don't find the typical debugger 
pretty-printer to be adequate at all - they never dump the type 
in the way that I think about the type.
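The per-structure pretty-printer idea can be sketched in a few 
lines of D; the Node type here is hypothetical, standing in for a 
real compiler data structure rather than dmd's actual one:

```d
import std.array : replicate;
import std.stdio : writefln;

// Hypothetical AST node, not dmd's real type.
struct Node
{
    string kind;
    int line;
    Node*[] children;
}

// Hand-written pretty-printer: dumps the tree the way the author
// thinks about it, rather than field-by-field as a debugger would.
void dump(const(Node)* n, int depth = 0)
{
    writefln("%s%s @ line %d", "  ".replicate(depth), n.kind, n.line);
    foreach (c; n.children)
        dump(c, depth + 1);
}

void main()
{
    auto leaf = new Node("IntegerLiteral", 3);
    auto root = new Node("AddExp", 3, [leaf]);
    dump(root); // prints AddExp, then its child indented below
}
```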


Sure, sounds like a winning strategy.  Probably not applicable, 
but were you to run into an issue with vibe or websockets, would 
you proceed to write pretty printers for the supplied data 
structures, the returned data structures, etc., or would you live 
with the not-so-pretty gdb structures just to get your debug 
session over with?


The point is it seems like more of your work on code is working 
with your own code - i.e. fewer outside dependencies or outside 
dependencies that you are intimately familiar with due to years 
of experience. This reduces the benefit or need of the debugger.


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Jonathan Marler via Digitalmars-d
On Saturday, 20 December 2014 at 12:36:50 UTC, ketmar via 
Digitalmars-d wrote:

On Sat, 20 Dec 2014 08:18:22 +
Jonathan Marler via Digitalmars-d digitalmars-d@puremagic.com 
wrote:


Performing a grep on phobos reveals there are currently almost 
3,000 casts.  I never like to use casts but they are a 
necessary evil.  I think anything D can do to help the 
programmer get their job done is a win.  The initial pro I 
saw for this idea was improving refactorability.  I see this 
as a huge win.  You claim it will hide bugs?  Could you give 
an example?

autocasting to the types of function arguments is the immediate 
weak point.


I see this as one of its strong points.  It provides greater 
refactorability, and also has the potential for using casts 
inside templates on unnamed variables (like function arguments).
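As a hedged sketch of that template case (all names here are made 
up for illustration): today a generic wrapper must be told the 
target type explicitly, because there is no argument variable for 
cast(typeof(...)) to name:

```d
import std.stdio : writeln;

void takesShort(short s) { writeln(s); }
void takesByte(byte b) { writeln(b); }

// Hypothetical forwarder that narrows its argument before the call.
// The target type T must be threaded through explicitly; a
// cast(auto) could in principle infer it from fn's parameter list.
void narrowCall(alias fn, T, A)(A value)
{
    fn(cast(T)value);
}

void main()
{
    int x = 1000;
    narrowCall!(takesShort, short)(x); // 1000 fits a short
    narrowCall!(takesByte, byte)(x);   // silently truncates
}
```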


..what you doing by this is destroying type checking. ok, we 
doing that from time to time, but you proposing handy syntax for 
it which will not break when function signature changes.


casting is not destroying the type checking.  You can't convert 
a value to just any type (an int to a struct); it still has to be 
possible.


struct MyStruct{...}
int x;
MyStruct s;
x = cast(int)s; // doesn't work
x = cast(auto)s; // still doesn't work



there is topic about kind of programmers nearby, where we found 
that majority of programmers don't read books. and they tend to 
use the first tool they see to make the job done, not trying to 
evaluate the long-term consequences. i can assure you that you'll 
see `cast(auto)` all over their code, 'cause it's simple, it 
doesn't require even looking at function signature and trying to 
grasp why it accepts the given types, and ah, it's so 
refactorable! (they love refactoring, 'cause they don't like to 
design their software, plus refactoring is often the blessed way 
to do nothing really valuable and still get paid).


I'm not sure you'll get too many good programmers who agree with 
you that refactoring is only caused by lack of design.  I'm not 
sure where to start when trying to list all the reasons someone 
would want to refactor.  A new library, a better design pattern 
is realized, a new feature comes in, a new person comes in with a 
better solution, a new api is introduced somewhere...so many 
reasons.  Programmers aren't perfect; if we did everything right 
the first time then we never would have had c++ :)  D itself is 
just a refactor.


To address your other concern about seeing cast(auto) all over 
someone's code.  I think you might start seeing cast(auto) in 
place of some cast(T), but why would you start seeing cast(auto) 
in other places?  Nobody writes cast when it's unnecessary:


int x;
int y = cast(int)x;

Why would anyone do this?  Don't tell me it's because the 
programmer is too stupid to know what a cast means, you could 
make that same argument for any feature.  Some languages have 
used that argument to remove any number of unsafe features like 
pointer arithmetic, non-garbage-collected memory, etc.  The 
purpose of D is not to restrict the programmer so they can't make 
any mistakes; its purpose is to make programming easier and of 
course safer, but not at the expense of removing the ability to 
do unsafe operations when needed.  It is true that 'cast' reduces 
type checking, but it doesn't remove it, and it's also necessary.




besides, `cast` is a hack by itself. the good way to deal with 
hacks is to make them less powerful, not more powerful. you are 
proposing to make the hack more powerful. there is nothing bad in 
it... when the language is designed for use by hardcore hackers. 
but for languages with greater audience this is not a good way to 
go. hacks will inevitably be abused. it doesn't matter how many 
times we write in big letters: PLEASE, DON'T DO THAT! so hacks 
must be small and fine-grained, not small and powerful.


cast is not a hack, it's a necessary feature that is well 
defined.  It reduces type safety, which may be why you are 
calling it a hack, but unfortunately it is necessary.  
cast(auto) is a natural extension to a necessary feature that's 
not going to go away. Type safety is not a simple problem.  The 
more type safe you get the more restrictive your language gets, 
which causes more need for casting.  The less type safe your 
language is, the less casting you need but then the more unsafe 
your whole program becomes.  D often chooses to make 90% of code 
safe and provides ways for you to escape the safety of the 
language for the extra 10%.  Cast falls into the category of the 
10%. If you go through some examples I think you'll find that 
cast(auto), used in the cases where it makes sense, actually 
produces code with fewer bugs.  But like any tool it can of 
course be misused; that's not a reason to not have it.




besides, `cast(auto)` is not decipherable without analyzing the 
expression (and, possibly, the function signature). it is not 
clear what the result of `cast(auto)` will be.

Re: What's missing to make D2 feature complete?

2014-12-20 Thread Vic via Digitalmars-d
As a commercial user (but non contributor) of D, here is my 
suggestion:

- remove GC and memory management as default
- find all features that are not being maintained or are just top 
heavy and deprecate.

- find features that should or could be downstream, and deprecate.

Vic
- http://www.quotationspage.com/quote/26979.html


On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management




Re: What's missing to make D2 feature complete?

2014-12-20 Thread Jonathan Marler via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


scope would be nice:)


Re: Inferred Type for Explicit Cast

2014-12-20 Thread ketmar via Digitalmars-d
On Sat, 20 Dec 2014 18:19:21 +
Jonathan Marler via Digitalmars-d digitalmars-d@puremagic.com wrote:

  we can't make `cast` less powerful now, but what we surely 
  shouldn't do is making it more powerful.
 
 You're right we can't make it less powerful because it's 
 necessary.  If you have an idea on how D could get rid of it we 
 would all love to hear it:)

something like modula's system module: if you need `cast`, or
pointers, or other low-level things, you should import system
module. and you HAVE to write `system.cast(...)`. make it cumbersome!
let `to!XXX` do the work, with all its checks. making unsafe features
easier to use is a way to spread their usage, and i see it as a
generally bad thing.


signature.asc
Description: PGP signature


Re: Rewrite rules for ranges

2014-12-20 Thread via Digitalmars-d

On Saturday, 20 December 2014 at 14:16:05 UTC, bearophile wrote:
In Haskell this is so common that GHC has a feature named 
rewrite rules:


Pure is fully based on term rewriting:

http://purelang.bitbucket.org/


Re: What is the D plan's to become a used language?

2014-12-20 Thread Vic via Digitalmars-d
First, thank you all the committers for a 'gifted free' lang that 
we use to build a company, we could have used any lang, we chose 
D.


My point is on 'management' more than on 'software'. On 
management, *EVERY* project is resource constrained, so imo, D 
should figure out what resources it has at hand. Based on that 
prioritize what can be maintained and what can't be maintained 
and hence marked as deprecated (so those that do care for it can 
move it downstream). It's painful to kill many sacred cows. I 
used the example of CLR and JRE team size relative to their 
'features surface area'.


Also, I'm pleased that 'no' is said at times (but ... we are 
still adding things, right, w/o saying: 'and if we add that, what 
are 2 features we can move downstream?').  Last I'm hearing is 
Andrei will gift C++ compatibility, etc. into core. (**: reason 
to split up the forum into users and public committers so people 
like me don't panic)

Cheers,
Vic
ps:
Second, a smaller thing I 'allude' to but don't verbalize in that 
argument is my personal preference for a smaller language.  Less 
is better/faster.  I proposed to move those deprecated features 
'downstream', just like the Linux Kernel and Linux GNU are 
separated (but can't exist w/o each other).  To build an 
ecosystem.
(here is comments on C++ having more features, one of the reasons 
I like smaller

http://harmful.cat-v.org/software/c++/linus
I do see 'featuritis' http://bit.ly/1wzVPMR as a way to doom 
projects in a compound way )


As to Walter (yes, I used the Watcom C compiler) saying No: I 
think he is too nice, and 99.5% is not good enough. I'd like him 
to be mean
- http://www.brainyquote.com/quotes/quotes/l/linustorva141511.html
and start removing things. The list of candidates is long; GC has 
been brought up as something that can be moved downstream.
D could have reference counters in base classes, but end users 
could INJECT a 3rd party GC mechanism they like. It's same thing, 
but downstream. Also I showed example of Go exceptions being 
downstream.
I'm not saying these 2 (out of 100) are not nice features, I'm 
saying if 'we' were forced, they could be moved downstream. You 
can just open Andrei's D book table of contents and find 
overweight things - if you are motivated to do that.



On Friday, 19 December 2014 at 16:44:59 UTC, Joakim wrote:
On Friday, 19 December 2014 at 15:11:30 UTC, Tobias Pankrath 
wrote:

On Friday, 19 December 2014 at 14:58:07 UTC, Joakim wrote:
On Friday, 19 December 2014 at 14:38:02 UTC, Tobias Pankrath 
wrote:
As for Walter already saying no a lot, given how many 
features D has, obviously one can still wish he went from 
99% no to 99.5%. ;)  You don't need to be around the D 
community forever to feel that D still has too many 
features that made it in.


Care to name a few and justify why exactly those features 
should be gone?


No, as that's not really my problem.  I was simply trying to 
clarify the argument others have made, that the language 
seems overstuffed and overwhelming, which I have experienced 
at times but I'm not personally complaining about.


It is a worthless claim to make that there is too much of 
something if you cannot come up with a concrete example. 
'I've got a gut feeling' is not even remotely an argument, and 
it just kills the time of everyone in this discussion.


If we want to discuss the future of the language, it's totally 
pointless to do it in an abstract way.  “We need to make the 
language more stable” is not a goal; it is totally unclear what 
that actually means, why it is important in the first place, how 
we can say that we have accomplished it, or what we need to do to 
realise that goal.


I have no dog in this fight.  I was merely pointing out to 
Walter and Mike that it's possible to say no a lot and still 
have others wish you had said no even more. :) There's no 
particular feature that I wish wasn't there, though of course 
there are many features that many wish were implemented or 
worked together better, as deadalnix points out.


When Vic suggested a split into a stable core and an 
experimental layer, I suggested documenting the perceived 
stability of various features instead, so that users could have 
a guide for what features might be more problematic without 
having to do a deep-dive in bugzilla to figure it out for 
themselves.  I didn't back a split, nor did I suggest removing 
features.




Re: What's missing to make D2 feature complete?

2014-12-20 Thread safety0ff via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.



Multiple alias this (DIP66 / #6083.)


D lang sillicon valley pre-planning meeting week of 12th

2014-12-20 Thread Vic via Digitalmars-d

Respond/participate here:
http://www.meetup.com/D-Lang-Sillicon-Valley/messages/boards/thread/48587409


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Jonathan Marler via Digitalmars-d
On Saturday, 20 December 2014 at 19:00:57 UTC, ketmar via 
Digitalmars-d wrote:

On Sat, 20 Dec 2014 18:19:21 +
Jonathan Marler via Digitalmars-d digitalmars-d@puremagic.com 
wrote:


 we can't make `cast` less powerful now, but what we surely 
 shouldn't do is making it more powerful.

You're right we can't make it less powerful because it's 
necessary.  If you have an idea on how D could get rid of it 
we would all love to hear it:)


something like modula's system module: if you need `cast`, or 
pointers, or other low-level things, you should import the system 
module. and you HAVE to write `system.cast(...)`. make it 
cumbersome! let `to!XXX` do the work, with all its checks. making 
unsafe features easier to use is a way to spread their usage, and 
i see it as a generally bad thing.


Hmmm...an interesting concept.  Make unsafe things harder to do 
to discourage their use.  I'm not sure if I agree but maybe you 
can convince me. Where do you draw the line?  If you wanted to 
make these features harder to use why not make the user type the 
word cast in a different language like Japanese?  Or why not take 
cast out all together and force the developer to use assembly or 
another language?  You said you could make the user import an 
unsafe module, why not go a step further and have them type in a 
password, or have them write out a long specific sequence to 
perform the cast:


byte x = cast$%*%@*@(*#!(($)$@)@$(@($)@$$*@(@**(!(byte) y;

I don't agree that making things harder to use is a good idea, 
but I do agree that making things more verbose can be a good idea 
to prevent misuse.  It makes sense because in order to use cast 
you should know what it means and if you know what it means you 
should know the verbose syntax to use it.  That concept makes 
sense.  But in my opinion, the cast keyword is enough.  It 
provides a way to audit your code by grepping for the cast 
keyword, and any programmer that sees the word cast and doesn't 
understand it will likely look up what it means.  I could get on 
board with adding a restriction that you tell the compiler 
somehow that you are going to use casts in your file (like 
importing the system module), but when you go to use it, I 
don't think it should be harder to use.  cast(auto) may be 
easier to use but it also makes more sense in some cases.  I've 
listed some of the cases in my other posts but the general idea 
is it requires less maintenance. Say you have the following:


void myfunc(uint x)
{
ubyte val;
//
// ...
//
val = cast(ubyte)x;
}

Now let's say that you need to change the type of 'val' to ushort.

void myfunc(uint x)
{
ushort val;
//
// ...
//
val = cast(ubyte)x;
}

Now we have a bug that could be incredibly difficult to find and 
will likely not be found through typical unit testing.  It will 
probably result in weird unexpected runtime crashes as the 
invalid cast error propagates through the program until something 
throws an exception.  The error is caused by having to write the 
same thing in 2 different places.  You have to write the type of 
'val' in the declaration and in the cast.  This could have been 
avoided if you had used cast(typeof(val)), but what if the name 
of val changes?  Then you have the same situation if another 
variable gets renamed to 'val'.  If you use cast(auto) you don't 
need to maintain the type in both places.  It prevents this 
particular bug.
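For comparison, a short sketch of the cast(typeof(val)) variant 
mentioned above, which survives the ubyte-to-ushort change (but 
not a rename of val):

```d
import std.stdio : writeln;

void myfunc(uint x)
{
    ushort val;               // type changed from ubyte...
    val = cast(typeof(val))x; // ...and the cast tracks it
    writeln(val);
}

void main()
{
    myfunc(60_000); // fits a ushort; a stale cast(ubyte) would
                    // have silently truncated it
}
```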


Now I'm not saying that cast(auto) is good in all cases, I'm just 
trying to get you to see the big picture here.  cast(auto) could 
be useful and could help prevent maintenance bugs in SOME cases.  
Yes it can be misused, but I don't agree that making it harder to 
use is a good way to prevent misusage, but requiring more 
verbosity is good.  You may disagree that cast is enough 
verbosity but I think that's a matter of opinion.








Re: What is the D plan's to become a used language?

2014-12-20 Thread via Digitalmars-d
On Saturday, 20 December 2014 at 12:13:42 UTC, Daniel Murphy 
wrote:
It would be easy to define such a list, but it would be 
near-impossible to force contributors to follow it.


Hardly, you have to be specific and make the number of issues 
covered in the next release small enough to create a feeling of 
being within reach in a short time span. People who don't care 
about fixing current issues should join a working group focusing 
on long term efforts (such as new features, syntax changes etc).


Refusing to accept contributions outside the goals would most 
likely result in less contributions, not more focused 
contributions.


That's good, people should not expect experimental features or 
unpolished implementations to be added to the next release. What 
goes into the next release should be decided on before you start 
on it.





ot: vibe.d forum

2014-12-20 Thread Vic via Digitalmars-d

vibe.d forum is down, can't post messages.


Re: What's missing to make D2 feature complete?

2014-12-20 Thread Benjamin Thaut via Digitalmars-d

Am 20.12.2014 18:39, schrieb Martin Nowak:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 314)
- finishing non-GC memory management


Shared library support on Windows ;-)


Re: std::string responsible for half the allocations in chrome

2014-12-20 Thread David Nadlinger via Digitalmars-d
On Saturday, 20 December 2014 at 02:14:37 UTC, Andrei 
Alexandrescu wrote:

RCString is the solution. http://dpaste.dzfl.pl/817283c163f5 --


How would refcounting help when the issue is const vs. immutable 
string slices?


David


Re: Inferred Type for Explicit Cast

2014-12-20 Thread ketmar via Digitalmars-d
On Sat, 20 Dec 2014 19:26:54 +
Jonathan Marler via Digitalmars-d digitalmars-d@puremagic.com wrote:

 H...an interesting concept.  Make unsafe things harder to do 
 to discourage their use.  I'm not sure if I agree but maybe you 
 can convince me. Where do you draw the line?
the thing is that they aren't only harder to use, but they are also
explicitly marked as system things. i.e. things that should be used
only if you *really* know what you're doing.

besides, code with `system.cast()` looks suspiciously burdensome, which
also signals that this is not a thing one should mindlessly use
everywhere.

something like that.

 is it requires less maintenance. Say you have the following:
 
 void myfunc(uint x)
 {
  ubyte val;
  //
  // ...
  //
  val = cast(ubyte)x;
 }
 
 Now let's say that you need to change the type of 'val' to ushort.
 
 void myfunc(uint x)
 {
  ushort val;
  //
  // ...
  //
  val = cast(ubyte)x;
 }
why do you need `cast` here? doesn't `to!ubyte` look better? and it
not only looks better, it will provide you additional overflow checks
too. so if you'll throw a '1024' in your unittest, the second version
will fail, signalling that something is going wrong.
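ketmar's point can be checked directly: to!ubyte throws instead 
of silently truncating, so the 1024 case fails loudly.

```d
import std.conv : to, ConvOverflowException;
import std.exception : assertThrown;

void main()
{
    uint ok = 200;
    assert(ok.to!ubyte == 200);   // fits a ubyte, converts cleanly

    uint bad = 1024;
    // doesn't fit a ubyte: to! range-checks and throws
    assertThrown!ConvOverflowException(bad.to!ubyte);
}
```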

 Now I'm not saying that cast(auto) is good in all cases, I'm just 
 trying to get you to see the big picture here.  cast(auto) could 
 be useful and could help prevent maintenance bugs in SOME cases.
i see that `cast` should NOT be used in that cases at all. and this
returns us to `cast` is hack. avoid the hacks!

 Yes it can be misused, but I don't agree that making it harder to 
 use is a good way to prevent misusage, but requiring more 
 verbosity is good.  You may disagree that cast is enough 
 verbosity but I think that's a matter of opinion.
sure, i'm just talking about what i see as good, i'm in no way trying
to tell that my opinion is the best one. sorry if my words are too
forcing. i'm a somewhat sarcastic person IRL, and i don't use English
in my everyday life, so i can sound too hard sometimes, failing to
properly translate my twisted style of writing. ;-)


signature.asc
Description: PGP signature


Re: What's missing to make D2 feature complete?

2014-12-20 Thread Kiith-Sa via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


D as a language is feature complete enough for me as is; 
improving the compiler, fixing remaining major compiler 
bugs/inconsistencies between spec and compiler  is more important 
for me. Maybe the ability to force-inline if nothing else.


Outside the language itself:
- Phobos could obviously use some fixing
  (especially obsolete stuff without a real replacement like 
std.stream)

- a GC that doesn't suck would help
  (I see people working on that from time to time, never gets 
finished/integrated)
- A finished std.allocator would help, whether or not Phobos uses 
it internally

- std.simd
- Proposed changes with GC/RC/manual allocation in would
  be very useful, but I expect that to take a shitload of time,
  assuming it doesn't get derailed and replaced by a yet more 
grandiose
  idea (remember TempAlloc - std.allocator - now this - nothing 
of that

  got finished)

Also, this pisses me off way too often: a way to build 
documentation as easily as doxygen Doxyfile (no need to write 
your own CSS to get a non-atrocious result, no messing with 
dependencies because DMD needs to import files I'm not building 
documentation with, no assuming I have a server by default, no 
generating files to feed to another program) and get a 
ready-for-use, readable, static HTML-CSS result.  All of 
DMD/DDoc, ddox and harbored are too involved.  (I would also 
prefer to have Markdown or ReST within DDoc, e.g. I don't find 
$(B bold) to be readable; I'll probably eventually try to 
implement that myself.)



.. that ended up surprisingly long.
   TLDR: language is good, Phobos needs work, doc generation sucks


Re: What's missing to make D2 feature complete?

2014-12-20 Thread weaselcat via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


Unique! and RefCounted! in a usable state.


Re: What's missing to make D2 feature complete?

2014-12-20 Thread via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.


I think the main problem is what is there already, which prevents 
more sensible performance features from being added and also is 
at odds with ensuring correctness.


By priority:

1. A well-thought-out ownership system to replace GC with 
compiler protocols/mechanisms that make good static analysis 
possible and pointers alias-free.  It should be designed before 
scope is added, and a GC-free runtime should be available.


2. Redesign features and libraries to better support AVX 
auto-vectorization as well as explicit AVX programming.


3. Streamlined syntax.

4. Fast compiler-generated allocators with pre-initialization for 
class instancing (get rid of emplace). Profiling based.


5. Monotonic integers (get rid of modular arithmetic) with range 
constraints.


6. Constraints/logic based programming for templates

7. Either explicit virtual or de-virtualizing class functions 
(whole program optimization).


8. Clean up the function signatures: ref, in, out, inout, and get 
rid of call-by-name lazy, which has been known to be a 
bug-inducing feature since Algol 60.  There is a reason why other 
languages avoid it.


9. Local precise GC with explicit collection for catching cycles 
in graph data-structures.


10. An alternative to try-catch exceptions that enforces 
error-checking without a performance penalty, e.g. separate error 
tracking on returns, or transaction-style exceptions (jump to 
root and free all resources on failure).


Re: What's missing to make D2 feature complete?

2014-12-20 Thread via Digitalmars-d

I forgot:

1.5 Explicit inlining and fixing the import system.


Re: Lost a new commercial user this week :(

2014-12-20 Thread Atila Neves via Digitalmars-d
i'm not talking about excellent programmers, i'm talking about 
basic CS knowledge. why is crc32 a bad hashing function? how to 
negate a machine integer without using the unary minus operator? 
what is a closure, what is a continuation and how do they differ? 
and so on. those are basic questions, yet i was forced to add 
those to my interviews.


I sincerely have no idea why you consider this to be basic 
knowledge. If we're talking C programmers, the vast majority 
won't know what a closure or a continuation is. Maybe the working 
environments you've been exposed to are drastically different 
from the ones I have. I've been interviewing a ton of candidates 
at work recently, and in my experience your expectations are way 
off.


People like us that even bother to learn D or be in this forum 
are not average programmers. I don't mean that in an elitist 
sense, I mean we're weird and expecting the rest of the world to 
be like us is, IMHO, silly.


I stay up at night watching programming conference videos. Nobody 
else where I work does. Oh, and this is coming from me, a person 
who was recently told had too high expectations of what basic C++ 
knowledge should be only weeks ago.


Atila


Re: What's missing to make D2 feature complete?

2014-12-20 Thread bachmeier via Digitalmars-d

On Saturday, 20 December 2014 at 18:42:52 UTC, Vic wrote:
As a commercial user (but non contributor) of D, here is my 
suggestion:

- remove GC and memory management as default


I sure hope not. It would eat a lot of developer time, and then 
the anti-GC crowd would switch to complaining about the lack of 
tools even more than they already do. You don't need to change 
the default in order to allow language users to avoid the GC.


Re: Lost a new commercial user this week :(

2014-12-20 Thread Atila Neves via Digitalmars-d
FWIW, I'd like to thank you for taking the time (and putting up 
with some undeserved abuse for it) to write about your experience 
in trying to get other people to adopt D. For some they might be 
known issues, but the fact is these things matter, and knowing 
about them is part of fixing them.


Alas, I abandoned doing development on Windows ages ago, so it's 
not something I can help with. I don't rely on debuggers myself, 
but I do use them when it makes sense to me and it's annoying 
enough when gdb can't print things properly even with Iain's 
patched version.


As for the documentation, like others I don't mind it or have any 
problems with it, but that's not the point here. The point is 
that potential new users did, and so it needs to be improved.


Your problems are not issues I have or had; but others did and as 
much as like my Arch Linux some will prefer Windows and the 
experience has to be good for them as well.


Atila

Sorry, there were 3 guys in particular who were ripping into me 
for whatever reason.  Sorry for the blanket statement, it was 
directed at them.  I don't really feel I need to be ripped apart 
for trying to encourage D use in the office, and then reporting 
on our experience.  You were personally supportive, so to 
yourself (and others), thanks.




Re: Lost a new commercial user this week :(

2014-12-20 Thread ketmar via Digitalmars-d
On Sat, 20 Dec 2014 20:29:34 +
Atila Neves via Digitalmars-d digitalmars-d@puremagic.com wrote:

  i'm not talking about excellent programmers, i'm talking about 
  basic CS knowledge. why is crc32 a bad hashing function? how to 
  negate a machine integer without using the unary minus 
  operator? what is a closure, what is a continuation and how do 
  they differ? and so on. those are basic questions, yet i was 
  forced to add those to my interviews.
 
 I sincerely have no idea why you consider this to be basic 
 knowledge. If we're talking C programmers, the vast majority 
 won't know what a closure or a continuation is.
so they aren't programmers, they are coders. cheap expendable resource.
yet they somehow believe that they are programmers and wants
programmers' salary. but we aren't rich enough to hire coders as
programmers.

 People like us that even bother to learn D or be in this forum 
 are not average programmers. I don't mean that in an elitist 
 sense, I mean we're weird and expecting the rest of the world to 
 be like us is, IMHO, silly.
yes, there is nothing elitist in knowing the basics. and there is 
nothing bad in not knowing the basics too... unless that person 
pretends to know the things. we have some tasks for coders too.


signature.asc
Description: PGP signature


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Jonathan Marler via Digitalmars-d
On Saturday, 20 December 2014 at 19:57:54 UTC, ketmar via 
Digitalmars-d wrote:

On Sat, 20 Dec 2014 19:26:54 +
Jonathan Marler via Digitalmars-d digitalmars-d@puremagic.com 
wrote:


Hmmm...an interesting concept.  Make unsafe things harder to 
do to discourage their use.  I'm not sure if I agree but maybe 
you can convince me. Where do you draw the line?

the thing is that they aren't only harder to use, but they are 
also explicitly marked as system things. i.e. things that 
should be used only if you *really* know what you're doing.

besides, code with `system.cast()` looks suspiciously 
burdensome, which also signals that this is not a thing one 
should mindlessly use everywhere.

something like that.


is it requires less maintenance. Say you have the following:

void myfunc(uint x)
{
 ubyte val;
 //
 // ...
 //
 val = cast(ubyte)x;
}

Now let's say that you need to change the type of 'val' to 
ushort.


void myfunc(uint x)
{
 ushort val;
 //
 // ...
 //
 val = cast(ubyte)x;
}
why do you need `cast` here? doesn't `to!ubyte` look better? and 
it not only looks better, it will provide you additional overflow 
checks too. so if you'll throw a '1024' in your unittest, the 
second version will fail, signalling that something is going 
wrong.



Ah yes, you are right, to!ubyte would make more sense because it 
checks at runtime whether or not x is within the range of a ubyte.  
This would make the code crash right away instead of having to 
track down the bug, but you still wouldn't see the bug until 
runtime, and it may only happen in rare cases that only occur in the 
customer environment.  to!(typeof(val)) would also handle 
changing the type when the type of val changes.  It looks awkward 
though.  It would be nice if `to` handled type inference.


val = to(x);
// instead of val = to!(typeof(val))(x);
// or
val = to!auto(x); // Impossible right now since the compiler doesn't 
understand `to`


val = to!auto(x) could be useful in some cases but this would 
be the same basic concept as cast(auto) except to allows for 
more sanity checking.  I'd have to think more about cases where 
cast(auto) would be useful but this is starting to take too much 
of my time so I'll leave that to other people if they want to 
take time to do so.  I've invested enough time into this and if 
no one else wants to support it then no big deal.  Like I said, 
this would only be a minor convenience.


Now I'm not saying that cast(auto) is good in all cases, I'm 
just trying to get you to see the big picture here.  
cast(auto) could be useful and could help prevent maintenance 
bugs in SOME cases.
i see that `cast` should NOT be used in these cases at all. and this
returns us to `cast` is a hack. avoid the hacks!

Yes it can be misused, but I don't agree that making it harder 
to use is a good way to prevent misuse; requiring more 
verbosity is good, though.  You may disagree that cast is enough 
verbosity but I think that's a matter of opinion.
sure, i'm just talking about what i see as good, i'm in no way trying
to tell that my opinion is the best one. sorry if my words are too
forceful. i'm a somewhat sarcastic person IRL, and i don't use English
in my everyday life, so i can sound too harsh sometimes, failing to
properly translate my twisted style of writing. ;-)


I don't mind your style of writing; some people might be offended 
by it but it doesn't bother me.  When discussing technical 
details it takes a lot of effort to be polite, and I'd rather you 
spend that effort on making the details more correct rather than 
trying to be tactful.


Re: DIP66 v1.1 (Multiple) alias this.

2014-12-20 Thread Andrei Alexandrescu via Digitalmars-d

On 11/2/14 6:57 AM, IgorStepanov wrote:

And there is dispute about is expression: see
http://forum.dlang.org/thread/ubafmwvxwtolhmnxb...@forum.dlang.org?page=5


OK, time to get this approved.

First, the current DIP doesn't seem to address this:


Walter and I would agree to making the presence of BOTH alias this
and opDispatch a compile-time error. That would break existing code
but not change semantics silently.


Any thoughts on this? Currently opDispatch gets priority over alias 
this, see lookup step 3 in section Semantics of 
http://wiki.dlang.org/DIP66. That's problematic because it puts 
opDispatch in _between_ normal subtyping via inheritance and alias 
this, which is supposed to be just as solid as inheritance.


I think the principled solution is to combine steps 2 and 4 into step 2, 
i.e. alias this is as strong as inheritance. Any ambiguous symbols would 
be rejected.


The second possibility, less principled but probably practical, would be 
to swap steps 3 and 4. That way alias this has juuust a teensy bit lower 
status than regular inheritance.


The simplest thing (which Walter favors) is to make the presence of both 
opDispatch and alias this a compile-time error. That would break only a 
teensy amount of code if any, and would give us time to investigate the 
best approach when compelling use cases come about. So I suggest we move 
forward with that for this DIP.


Regarding the is-expression controversy in 
http://forum.dlang.org/thread/ubafmwvxwtolhmnxb...@forum.dlang.org?page=5:


First off, is(S : T) is a subtyping test - is S a non-proper subtype of 
T, or not? (Non-proper or improper subtyping: S is allowed to be 
identical to T). alias this is a mechanism that introduces subtyping. 
It follows that subtyping introduced via alias this must be detected 
with is-expressions.
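In current D (single alias this), that detection is straightforward to demonstrate; a minimal sketch with illustrative type names:

```d
struct T { int x; }

struct S
{
    T t;
    alias t this;   // introduces the subtyping relationship S : T
}

static assert(is(S : T));   // subtyping via alias this is visible to is()
static assert(is(T : T));   // non-proper: T is a subtype of itself
static assert(!is(T : S));  // but not the other way around

void takesT(T t) {}

void main()
{
    S s;
    s.x = 42;    // member access forwarded through the alias
    takesT(s);   // implicit conversion S -> T
}
```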


Now, you give an example of subtyping where one or more objects of 
the same supertype may be reached through two or more different paths. 
This is a well-known problem in subtyping (known as the diamond hierarchy 
or repeated inheritance).


In the case of alias this, different objects of the same type may be 
reachable (or at least the compiler is unable to tell statically whether 
the objects are distinct or not). A correct but hamfisted solution would 
be to sever the subtyping relationship whenever the same type is 
reachable through multiple paths.


The versatility of alias this, however, suggests a better solution: if 
T is indirectly reachable as a supertype of S through more than one path 
and the subtyping is either tested (by means of an is-expression) or 
effected (by means of an implicit conversion), the compiler should issue 
a compile-time error asking the user to define an alias this DIRECTLY 
inside S, which takes precedence over indirect reachability and informs 
the type system which T of the several reachable ones is needed.


Please let me know of any thoughts. Thanks!


Andrei


Re: DIP69 - Implement scope for escape proof references

2014-12-20 Thread Walter Bright via Digitalmars-d

On 12/19/2014 9:44 PM, Dicebot wrote:

Such notion of view requires at least some elements of transitivity to be
practical in my opinion.


I have no idea how "some elements of transitivity" can even work. It's either 
transitive or it's not. Please don't think of scope in terms of ownership; 
ownership is an orthogonal issue.




Also with my definition in mind your example of a tree
that stores scope nodes makes absolutely no sense unless the whole tree itself is
scoped (and nodes are thus scoped transitively). Such a view always assumes the
worst case about ownership and shouldn't persist in any form (as that would
require some serious ownership tracking).


This is definitely conflating scope and ownership.



ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-20 Thread Walter Bright via Digitalmars-d

https://www.reddit.com/r/programming/comments/2pvf68/armv7_vs_x8664_pathfinding_benchmark_of_c_d_go/

Please take a look at this and ensure that the benchmark code is using D 
correctly.

I did notice this:

I updated the ldc D compiler earlier today (incidentally, as part of upgrading 
my system with pacman -Syu), and now it doesn't compile at all. It was 
previously compiling, and ran at around 90% the speed of C++ on ARM.


Sigh.


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-20 Thread MattCoder via Digitalmars-d
On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright 
wrote:

https://www.reddit.com/r/programming/comments/2pvf68/armv7_vs_x8664_pathfinding_benchmark_of_c_d_go/

Please take a look at this and ensure that the benchmark code 
is using D correctly...


There is already a topic about this:
http://forum.dlang.org/thread/agevpeanzbpbtcjgx...@forum.dlang.org

Matheus.


Re: What is the D plan's to become a used language?

2014-12-20 Thread Paulo Pinto via Digitalmars-d

On Saturday, 20 December 2014 at 15:14:28 UTC, Bienlein wrote:
On Saturday, 20 December 2014 at 14:06:51 UTC, Paulo Pinto 
wrote:



That is why I seldom buy into hype driven development.


Okay, so Docker is hype? Have you seen the impact of it? Every 
Java magazine has articles about Docker. And that is not 
because Java people had an interest in Go, which it is written 
in. It is because of its business value.


Have a look at all the job offers for Go developers here: 
http://www.golangprojects.com. All those jobs are the result of 
some hype.


I wasn't talking about Go specifically, rather about the adoption of 
technologies on the upward slope of the hype curve in the IT 
fashion world.


My wish for 2015...

2014-12-20 Thread Xinok via Digitalmars-d
I'm going to make a stark proposal to you all, the community 
and all D users as a whole. I wish for us to set an ultimate goal 
to be made top priority and completed by the end of next year. My 
wish is to resolve the issue of memory management for D by the 
end of 2015. This is a significant issue that has affected most 
of us at one point or another. I think this gives D a bad rap 
more than anything else and is a point of contention for many, 
especially those with a background in C/C++.


I think the problem of memory management can be reduced to two 
points:

(1) The garbage collector for D is sub-par.
(2) There are too many implicit allocations in Phobos.

I think three goals need to be met for the problem of memory 
management to be satisfied:
(1) We need a precise garbage collector. The fact that a 
garbage-collected language experiences memory leaks truly 
reflects poorly on D.
(2) Furthermore, we need to improve the performance of the 
garbage collector. There are some things the developer can do to 
reduce the time and frequency of collection cycles, but the current 
situation is far from optimal.
(3) We need a viable alternative to garbage collection. 
Whether that be allocators, ref counting, or full-fledged manual 
memory management, there is great demand for the ability to use D 
without the GC with little hassle.


I sincerely believe that this is the greatest issue facing D 
today and something that should have been resolved a long time 
ago. The fact that relatively simple programs can crash with 
out-of-memory errors (especially 32-bit executables) and 
high-performance code experiences frequent or lengthy collection 
cycles means we have a bad situation on our hands.


Things like @nogc are a start but much more needs to be done. I'm 
not hoping for an optimal solution, nor am I expecting a 
state-of-the-art garbage collector. I think we should simply aim 
for "good enough". Then once we have a better memory management 
scheme, we can begin incorporating these changes into Phobos.
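As an illustration of the @nogc attribute mentioned above (a sketch; the function is made up for the example):

```d
// @nogc is enforced at compile time: the function may not GC-allocate.
@nogc int sum(const(int)[] a)
{
    int s = 0;
    foreach (x; a)
        s += x;
    return s;
}

// @nogc int[] bad() { return new int[3]; } // error: 'new' allocates via the GC

void main() @nogc
{
    int[3] buf = [1, 2, 3];    // stack storage, no GC involvement
    assert(sum(buf[]) == 6);
}
```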


What do you all think? Can we make improving memory management 
the top priority for 2015 with the goal of developing an adequate 
solution by the end of next year?


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-20 Thread bearophile via Digitalmars-d

MattCoder:


There is already a topic about this:
http://forum.dlang.org/thread/agevpeanzbpbtcjgx...@forum.dlang.org

Matheus.


And perhaps even a bug report of mine:

http://forum.dlang.org/thread/zpjjzbkwlisjemoxu...@forum.dlang.org?page=5#post-izyhysusezbidhqdncan:40forum.dlang.org

Bye,
bearophile


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-20 Thread Walter Bright via Digitalmars-d

On 12/20/2014 2:39 PM, bearophile wrote:

MattCoder:


There is already a topic about this:
http://forum.dlang.org/thread/agevpeanzbpbtcjgx...@forum.dlang.org

Matheus.


And perhaps even a bug report of mine:

http://forum.dlang.org/thread/zpjjzbkwlisjemoxu...@forum.dlang.org?page=5#post-izyhysusezbidhqdncan:40forum.dlang.org



Bug reports go into bugzilla. Reporting them in the n.g. means they'll likely 
get ignored. Of course, once in bugzilla, it's fine to make posts about it.


Re: Do everything in Java…

2014-12-20 Thread Andrei Alexandrescu via Digitalmars-d

On 12/6/14 7:26 AM, Russel Winder via Digitalmars-d wrote:

Primitive types are scheduled for removal, leaving only reference
types.


Wow, that's a biggie. Link(s)? -- Andrei


Re: DIP66 v1.1 (Multiple) alias this.

2014-12-20 Thread Joseph Rushton Wakeling via Digitalmars-d

On 02/11/14 15:55, IgorStepanov via Digitalmars-d wrote:

http://wiki.dlang.org/DIP66

I've applied some changes to it, however there are still some unresolved 
questions.


The current DIP doesn't address protection attributes.  I recognize this might 
be somewhat orthogonal, but it'd be nice to build it into the DIP if possible, 
just to be explicit about what is expected for how alias this should work.


According to TDPL the following should work:

struct Foo
{
    private T internal_;         // member variable is private

    public alias internal_ this; // ...but can be interacted with
                                 // via the public alias
}

It seems to me an important factor, because it means that classes and structs 
can use subtyping without revealing the implementation details.  As things are, 
you wind up having to do something like,


struct Integer
{
    private int i_;

    public ref int getInteger() @property
    {
        return i_;
    }

    alias getInteger this;
}

... which personally I find a bit of an unpleasant violation of the idea of a 
private implementation.


See also: https://issues.dlang.org/show_bug.cgi?id=10996


Re: What's missing to make D2 feature complete?

2014-12-20 Thread aldanor via Digitalmars-d

- static foreach (declaration foreach)
- fixing __traits templates (e.g. getProtection being extremely 
flaky, allMembers not working, etc.) -- seeing as CTFE is one of 
the flagship features of D, it would make sense to actually make it 
work flawlessly.


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-20 Thread bearophile via Digitalmars-d

Walter Bright:

Bug reports go into bugzilla. Reporting them in the n.g. means 
they'll likely get ignored.


I'll take care of not letting it get ignored :-)

Bye,
bearophile


Re: DIP69 - Implement scope for escape proof references

2014-12-20 Thread Andrei Alexandrescu via Digitalmars-d

On 12/6/14 4:49 PM, Manu via Digitalmars-d wrote:

I need, at least, forceinline to complete it, but that one *is*
controversial - we've talked about this for years.


I'm still 883 messages behind so here's a drive-by comment - it's time 
to revisit this, I think the need has become a lot clearer. -- Andrei


Re: Invariant for default construction

2014-12-20 Thread Walter Bright via Digitalmars-d

On 11/17/2014 11:58 PM, Rainer Schuetze wrote:

I remember having an invariant on a tree structure checking consistency by
verifying the children and parent references. This crashed when adding a
destructor. With the proposed change it will always crash.

The problem is that the destructors of the tree nodes are called in arbitrary
order when they are collected by the GC. Class instances are also made invalid
after calling the destructor (the vtbl is zeroed).

I wonder if

- such invariants are invalid,
- the GC should bypass the invariant when calling the destructor
- or we should never call the invariant with the destructor?


Invariants should check the state that the object itself owns, not other 
objects. I would consider such an invariant invalid.
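A sketch of the distinction (the type is invented for illustration): an invariant over the object's own fields is safe, while one that chases references into other GC-managed objects can fire spuriously during collection, when those objects may already be finalized:

```d
class Node
{
    int value;
    Node parent;    // reference to another GC-managed object

    this(int v) { value = v; }

    invariant
    {
        // Safe: checks only state this object owns.
        assert(value >= 0);

        // Risky: during GC finalization, destruction order is arbitrary,
        // so `parent` may already have been destroyed:
        // assert(parent is null || parent.value >= 0);
    }

    void touch() {} // calling any public member runs the invariant
}

void main()
{
    auto n = new Node(1);
    n.touch();      // invariant holds
}
```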


Re: BNF grammar for D?

2014-12-20 Thread Kingsley via Digitalmars-d
On Friday, 19 December 2014 at 02:53:02 UTC, Rikki Cattermole 
wrote:

On 19/12/2014 10:19 a.m., Kingsley wrote:

On Wednesday, 17 December 2014 at 21:05:05 UTC, Kingsley wrote:



Hi Bruno,

Thanks very much. I do have a couple of questions about DDT 
in

relation to my plugin.

Firstly - I'm not too familiar with parsing/lexing but at 
the moment

the Psi Structure I have implemented that comes from the DDT
parser/lexer is not in any kind of hierarchy. All the 
PsiElements are
available but all at the same level. Is this how the DDT 
parser
works? Or is it down to my implementation of the 
Parser/Lexer that

wraps it to create some hierarchy.

For intellij it's going to be vastly easier to have a 
hierarchy with
nested elements in order to get hold of a structure 
representing a
class or a function for example - in order to do things like 
get the
start and end lines of a class definition in order to apply 
code

folding and to use for searching for classes and stuff.

Secondly - how active is the development of DDT - does it 
keep up with the D2 releases?

--Kingsley

--Kingsley


After doing a bit more research it looks like I have to 
create the psi hierarchy myself - my current psi structure is flat 
because I'm just converting the DeeTokens into PsiElements 
directly. I've still got some experimentation to do. On the plus 
side I implemented commenting and code folding, but everything 
else needs a psi hierarchy.


I've done some more investigation and I do need to build the 
parser
myself in order to create the various constructs. I've made a 
start but
I haven't gotten very far yet because I don't fully understand 
the

correct way to proceed.

I also had a look at using the DeeParser - because it already 
does most of what I want. However the intellij plugin wants a 
PsiParser which returns an intellij ASTNode from the primary parse 
method. I can't see an easy way to hook this up with DeeParser 
because the ParsedResult, although it has a node method on it, 
gives back the wrong type of ASTNode.


Any pointers on how I might get the DeeParser to interface to 
an

intellij ASTNode would be appreciated.


Read my codebase again; it'll answer a lot of questions. Your 
parser is different, but what it produces shouldn't be. And yes, 
it supports hierarchies.


Hi

So finally after a lot of wrestling with the internals of 
intellij I finally managed to get a working parser implementation 
that produces a psi hierarchy based on the DeeParser from the ddt 
code.


The main issue was that Intellij only wants you to create a 
parser using their toolset - either with a BNF grammar from 
which you can then generate the parser, or with a hand-written 
parser. Since I'm already using the DDT lexer and there is a 
perfectly good DDT parser as well, I just wanted to re-use the 
DDT parser.


However Intellij does not provide any way to create a custom 
AST/PSI structure or use an external parser. So I basically had 
to wrap the DeeParser inside the Intellij parser and sync them up 
programmatically. It's not the most efficient way in the world 
but it at least works.


In the long term I will write a BNF grammar for Intellij (using 
their toolkit) but I can see that will take me several months so 
this is a quick way to get the plugin up and running with all the 
power of intellij extras without spending several months stuck 
learning all about the complexities of grammar parsing and lexing.


Thanks very much for your help. Once I get a bit more of the cool 
stuff done I will release the plugin.


Re: DIP69 - Implement scope for escape proof references

2014-12-20 Thread Andrei Alexandrescu via Digitalmars-d

On 12/6/14 4:49 PM, Manu via Digitalmars-d wrote:

In the situation where templates are involved, it would be nice to be
able to make that explicit statement that some type is ref or not at
the point of template instantiation, and the resolution should work
according to the well-defined rules that we are all familiar with.


Another drive-by comment: I understand the motivation for this and the 
difficulties involved. There needs to be a clear understanding that 
adding new type qualifiers is extremely intrusive and expensive. Because 
of that, I think we should best address binding generation via a set of 
tactical tools i.e. standard library artifacts that do all that mixin 
business in an encapsulated and reusable manner.


(As an aside forcing a template instantiation to decide ref vs. no ref 
should be easy but currently can't be done, i.e. this code should work 
but currently doesn't:


T fun(T)(ref T x) { return x + 1; }

void main(string[] args)
{
int function(int) f1 = fun!int;
int function(ref int) f2 = fun!int;
}

)


Andrei



Re: Inferred Type for Explicit Cast

2014-12-20 Thread Steven Schveighoffer via Digitalmars-d

On 12/20/14 5:20 AM, Dicebot wrote:


I'd like to have a cast where you must define both from and to types
precisely.


I was actually thinking the same thing. This would be almost 
future-proof (any changes to either side would result in failed 
compilation).


-Steve


Re: Inferred Type for Explicit Cast

2014-12-20 Thread Steven Schveighoffer via Digitalmars-d

On 12/20/14 3:36 AM, Jonathan Marler wrote:


Nobody likes to use cast, but for now we are stuck with it. Creating
alternatives to cast would be a great thing to discuss but doesn't
really apply to the point at hand, which is: would cast(auto) be a
useful extension to our current cast operator?  I think it could be.  In
my opinion, if we allow return value types to be written as auto then
it makes sense to have cast(auto) as well.  In both cases the developer
would need to look somewhere else to find what type auto actually gets
resolved to.


You have to be careful here, when you think about who is in charge of what.

For an auto return, it is the function author who is deciding what auto 
should resolve to. But cast(auto) is letting the author of the called 
function dictate. This is a decoupling of who is responsible for the 
type vs. who is requesting the cast.


Now, just 'auto' is fine, because you are not subverting the type 
system, and unsafe behavior cannot result. But with something like 
'cast', you are potentially playing with fire.


For instance, let's say you have a function which accepts an int, but 
the author changes it later to accept a pointer to an int. You are 
passing in a size_t via cast(auto); now the compiler happily 
reinterprets the size_t as a pointer, and you are in dangerous territory.
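The hazard can be made concrete with a hypothetical callee whose signature changes between revisions (function names invented for illustration):

```d
// Original signature: the caller consciously casts to int.
int useInt(int v) { return v + 1; }

// Later revision: the author switches the parameter to a pointer.
int usePtr(int* p) { return *p; }

void main()
{
    size_t x = 1234;

    // Explicit target type: the cast's meaning is fixed by the caller.
    auto a = useInt(cast(int) x);

    // A hypothetical cast(auto) would infer int* from the new signature
    // and silently reinterpret the integer as a pointer -- equivalent to:
    // auto b = usePtr(cast(int*) x); // compiles, then dereferences garbage
}
```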


You can think of it this way. With cast(T), you are saying "I've 
examined the possibilities of casting this value to type T, I know what 
I'm doing." With cast(auto) you are saying "I'm OK with this value 
casting to any other value that cast may work with. I know what I'm 
doing." I find that the requirement of just typing cast(auto) does not 
match the gravity of the analysis that is required to ensure that is true.


-Steve


Re: Invariant for default construction

2014-12-20 Thread Steven Schveighoffer via Digitalmars-d

On 12/20/14 7:16 PM, Walter Bright wrote:

On 11/17/2014 11:58 PM, Rainer Schuetze wrote:

I remember having an invariant on a tree structure checking
consistency by
verifying the children and parent references. This crashed when adding a
destructor. With the proposed change it will always crash.

The problem is that the destructors of the tree nodes are called in
arbitrary
order when they are collected by the GC. Class instances are also made
invalid
after calling the destructor (the vtbl is zeroed).

I wonder if

- such invariants are invalid,
- the GC should bypass the invariant when calling the destructor
- or we should never call the invariant with the destructor?


Invariants should be checking the state of the object that it owns, not
other objects. I would consider such an invariant invalid.


Wouldn't a tree own its nodes? Checking that a tree is properly 
sorted (or maybe properly balanced) cannot possibly be done 
without actually looking at its nodes.


How do you propose one would check that invariant?

-Steve


Re: Invariant for default construction

2014-12-20 Thread Walter Bright via Digitalmars-d

On 12/20/2014 7:11 PM, Steven Schveighoffer wrote:

Wouldn't a tree own its nodes?


I've replied to this repeatedly. Think of a symbol table tree, in which symbols 
are looked up. References to found symbols are then inserted into the AST.


Building a language design that REQUIRES ownership of all references in an 
object would be cripplingly limited.




I find the idea of a tree checking its nodes to
ensure it's properly sorted (or maybe properly balanced) cannot possibly be done
without actually looking at its nodes.

How do you propose one would check that invariant?


Not using invariant() to do it. The existence of invariant() with language 
support does not mean that there aren't other ways to do it, or that invariant() 
must be universally applicable to everything.


For an analogy, constructors don't solve every creation issue - sometimes a 
factory() method is more appropriate.




Re: What's missing to make D2 feature complete?

2014-12-20 Thread Wyatt via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


Add scope and properties to that and I think we're in pretty good 
shape... personally.  Other people will have different pet issues.


-Wyatt


Re: ini library in OSX

2014-12-20 Thread Joel via Digitalmars-d-learn
On Monday, 13 October 2014 at 16:06:42 UTC, Robert burner Schadek 
wrote:

On Saturday, 11 October 2014 at 22:38:20 UTC, Joel wrote:
On Thursday, 11 September 2014 at 10:49:48 UTC, Robert burner 
Schadek wrote:

some self promo:

http://code.dlang.org/packages/inifiled


I would like an example?


go to the link and scroll down a page


How do you use it with current ini files ([label] key=name)?


Re: DUB build questions

2014-12-20 Thread Russel Winder via Digitalmars-d-learn

On Sat, 2014-12-20 at 05:46 +, Dicebot via Digitalmars-d-learn wrote:
 On Saturday, 20 December 2014 at 04:15:00 UTC, Rikki Cattermole 
 wrote:
   b) Can I do parallel builds with dub. CMake gives me Makefiles 
   so I can
   make -j does dub have a similar option?
  
  No
 
 Worth noting that it is not actually a dub problem as much, it is 
 simply not worth adding parallel builds because separate
 compilation is much much slower with existing D front-end 
 implementation and even doing it in parallel is sub-optimal
 compared to dump-it-all-at-once.

From previous rounds of this sort of question (for the SCons D 
tooling), the consensus of the community appeared to be that the only 
time separate module compilation was really useful was for mixed D, C, 
C++, Fortran systems. For pure D systems, a single call of the compiler 
is deemed far better than the traditional C, C++, Fortran compilation 
strategy. This means the whole "make -j" thing is not an issue; it 
just means that Dub is only really dealing with the all-D situation.

The corollary to this is that DMD, LDC and GDC really need to make use 
of all parallelism they can, which I suspect is more or less none.

Chapel has also gone the "compile all modules with a single compiler 
call" strategy, as this enables global optimization from source to 
executable.
  
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: Derelict SDL2 library not loading on OS X

2014-12-20 Thread Mike Parker via Digitalmars-d-learn

On 12/20/2014 11:46 AM, Joel wrote:


To uninstall SDL, do I just wipe the framework, and SDL dylib?


Sorry, I can't help you there. I have no idea how things are done on 
Mac. And I think it depends on how it got there in the first place. You 
may want to take that question to the SDL mailing list (web interface at 
[1]) if no one answers here.


[1] https://forums.libsdl.org/

