Re: Opportunities for D

2014-08-20 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 15 August 2014 at 12:49:37 UTC, Paulo Pinto wrote:
So is the cost of trying not to have a healthy set of 
libraries as part of the standard, like the other programming 
languages have. Thanks to the C tradition, the language library 
is the OS.


Thankfully, the standard is now catching up and will eventually 
cover a good set of use cases in the standard library.


I think the importance of standard libraries is overrated beyond 
core building blocks for real systems programming. You usually 
want to use the ADTs of the environment to avoid format 
conversions, or fast domain-specific solutions for performance.


If you want FFT you need to look for the best hardware library 
you can find; no language library will be good enough. Same with 
unwrapping of complex numbers to magnitude/phase, decimation and 
a lot of other standard routines.


Libraries with no domain in mind tend to suck. So performant 
frameworks tend to roll their own.


I think Phobos is aiming too wide; it would be better to focus on 
quality and performance for the core stuff, based on real use and 
benchmarking. A benchmarking suite seems to be missing?


A good, clean, stable language and compiler is sufficient. A 
library with core building blocks that can be composed is a nice 
extra. Phobos should be more focused. Add too much and you end 
up with underperforming solutions, unmaintained code, untested 
buggy code, or weird interfaces, e.g. lowerBound(), which returns 
the inverse of what the name indicates, or walkLength(), which 
does not take a visitor object.
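
As a minimal sketch of the two interfaces being complained about 
(an illustration only, using today's std.range):

    import std.range : assumeSorted, walkLength;

    void main()
    {
        auto sorted = assumeSorted([1, 2, 4, 8, 16]);

        // lowerBound(8) yields the elements *below* 8 (the left partition),
        // which is arguably the inverse of what the name suggests.
        auto lower = sorted.lowerBound(8);      // [1, 2, 4]

        // walkLength just counts elements by walking the range;
        // despite the name, it takes no visitor object.
        assert(walkLength(lower) == 3);
    }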


Providing good solid interconnects / abstractions is more 
important than functionality and solutions for growing the 
ecosystem. In Python the key interconnect feature is having solid 
language-level support for lists/dicts. C++ tried iterators, but 
it is tedious to define your own and they tend to be underperformant, 
so frameworks might not want to use them. D is trying ranges... 
But without benchmarks... Who knows how it fares in comparison to 
a performance-oriented algorithm?
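
For comparison, a toy D input range (a hypothetical Counter type, not 
from Phobos) showing how little boilerplate a range needs compared to a 
hand-written C++ iterator pair; how it fares performance-wise is exactly 
the open benchmarking question above:

    struct Counter
    {
        int current, limit;
        bool empty() const { return current >= limit; }
        int front() const { return current; }
        void popFront() { ++current; }
    }

    void main()
    {
        import std.algorithm.iteration : sum;
        assert(Counter(0, 5).sum == 10);    // 0 + 1 + 2 + 3 + 4
    }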




Re: Opportunities for D

2014-08-15 Thread Daniel Gibson via Digitalmars-d
On 11.08.2014 13:02, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:


I think dataflow in combination with transactional memory (Haswell and
newer CPUs) could be a killer feature.


FYI: Intel TSX is not a thing anymore; it turned out to be buggy and is 
now disabled by a microcode update:

http://techreport.com/news/26911/errata-prompts-intel-to-disable-tsx-in-haswell-early-broadwell-cpus

Seems like even the upcoming Haswell-EP Xeons will have it disabled.

Cheers,
Daniel


Re: Opportunities for D

2014-08-15 Thread Paulo Pinto via Digitalmars-d
On Monday, 11 August 2014 at 15:42:17 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 11 August 2014 at 15:13:43 UTC, Russel Winder via ...

C++ is a good example of the high ecosystem costs of trying to 
support everything but providing very little out of the box. You 
basically have to select one primary framework and then try to 
shoehorn other reusable components into that framework with ugly 
layers of glue…


So is the cost of trying not to have a healthy set of libraries 
as part of the standard, like the other programming languages have. 
Thanks to the C tradition, the language library is the OS.


Thankfully, the standard is now catching up and will eventually 
cover a good set of use cases in the standard library.


However, there is legacy code out there that doesn't know 
anything about ANSI standard revisions.



--
Paulo


Re: Opportunities for D

2014-08-11 Thread via Digitalmars-d
On Sunday, 10 August 2014 at 10:00:45 UTC, Russel Winder via 
Digitalmars-d wrote:
So if D got CSP, it would be "me too" but useful. If D got 
dataflow, D would be the first language to support dataflow in 
native code systems. Now that could sell.


Yes, that would be cool, but what do you mean specifically by 
dataflow? Apparently it is used to describe everything from 
tuple spaces to DSP engines.


I think dataflow in combination with transactional memory 
(Haswell and newer CPUs) could be a killer feature.


(I agree that CSP would be too much me too unless you build 
everything around it.)


Re: Opportunities for D

2014-08-11 Thread Russel Winder via Digitalmars-d
On Mon, 2014-08-11 at 11:02 +, via Digitalmars-d wrote:
[…]
 Yes, that would be cool, but what do you mean specifically by 
 dataflow? Apparently it is used to describe everything from 
 tuple spaces to DSP engines.

I guess it is true that tuple spaces can be dataflow systems, as indeed
can Excel. DSP engines are almost all dataflow exactly because signal
processing is a dataflow problem.

For me, software dataflow architecture is processes with input channels
and output channels, where each process only computes on the receipt
of data ready on some combination of its inputs. I guess my exemplar
framework is GPars dataflow:
http://www.gpars.org/1.0.0/guide/guide/dataflow.html
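
A rough approximation in D, using std.concurrency message passing rather 
than a real dataflow framework (the adder node is a hypothetical example): 
the node only computes once both of its inputs have arrived.

    import std.concurrency;
    import std.stdio : writeln;

    void adder()
    {
        int a = receiveOnly!int();   // wait for the first input "channel"
        int b = receiveOnly!int();   // wait for the second input "channel"
        ownerTid.send(a + b);        // emit on the output "channel"
    }

    void main()
    {
        auto node = spawn(&adder);
        node.send(2);
        node.send(3);
        writeln(receiveOnly!int());  // 5
    }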

 I think dataflow in combination with transactional memory 
 (Haswell and newer CPUs) could be a killer feature.

Václav Pech, myself and others have been discussing the role of STM but
haven't really come to a conclusion. STM is definitely a great tool for
virtual machine, framework and library developers, but it is not certain
it is a useful general applications tool.

 (I agree that CSP would be too much me too unless you build 
 everything around it.)

I disagree. Actors, dataflow and CSP are all different. Each can be
constructed from one of the others, true, but that leads to
inefficiencies. It turns out to be better to implement all three
separately, based on a lower-level set of primitives.

Technically a CSP implementation has proof obligations to be able to
claim to be CSP. As far as I am aware the only proven implementations
are current JCSP and C++CSP2.

D has the tools needed as shown by std.parallelism. If it could get
actors, CSP and dataflow then it would have something new to tell the
world about to be able to compete in the marketing stakes with Go and
Rust.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Opportunities for D

2014-08-11 Thread via Digitalmars-d
On Monday, 11 August 2014 at 15:13:43 UTC, Russel Winder via 
Digitalmars-d wrote:
For me, software dataflow architecture is processes with input 
channels and output channels, where each process only computes on 
the receipt of data ready on some combination of its inputs.


Yes, but to get efficiency you need to make sure to take 
advantage of cache coherency…


I think dataflow in combination with transactional memory 
(Haswell and newer CPUs) could be a killer feature.


Václav Pech, myself and others have been discussing the role of STM 
but haven't really come to a conclusion. STM is definitely a great 
tool for virtual machine, framework and library developers, but it 
is not certain it is a useful general applications tool.


Really? I would think that putting TM to good use would be 
difficult without knowing the access patterns, so it would be 
more useful for engine and application developers…?


You essentially want to take advantage of a low probability of 
accessing the same cache-lines within a transaction, otherwise it 
will revert to slow locking. So you need to minimize the 
probability of concurrent access.


(I agree that CSP would be too much me too unless you build 
everything around it.)


I disagree. Actors, dataflow and CSP are all different. Each can 
be constructed from one of the others, true, but that leads to 
inefficiencies. It turns out to be better to implement all three 
separately, based on a lower-level set of primitives.


I am thinking more of the eco-system. If you try to support too 
many paradigms you end up with many small islands which makes 
building applications more challenging and source code more 
difficult to read.


I think dataflow would be possible to work into the range-based 
paradigm that D libraries seem to follow.


C++ is a good example of the high ecosystem costs of trying to 
support everything but providing very little out of the box. You 
basically have to select one primary framework and then try to 
shoehorn other reusable components into that framework with ugly 
layers of glue…


Re: Opportunities for D

2014-08-10 Thread Bienlein via Digitalmars-d
I think Walter is exactly right with the first 7 points he lists 
in his starting post of this thread. Nullable types are 
nice, but don't get too distracted by them. The first 7 
points are far more important. Go makes absolutely no effort to 
get rid of nil, and it is very successful in spite of this 
nil thing.


IMHO goroutines and channels are really the key. D might be a 
better C++. But languages need a use case to make people change. 
I don't see why D can't do for cloud computing and concurrent 
server-side software what Go is doing. Go's GC is also not that 
advanced, but it is precise, so 24/7 operation is not a problem. 
Making the D GC precise is more important than making it faster.


Actually, you now get the strange situation that, to make D a 
language for the cloud, a quick approach would be to make 
everything GC'd and let people have pointers as well, as in Go. Of 
course, that is no good approach for the long-run prospects of D. But 
letting all memory management be handled by the GC should remain easy 
in D. Otherwise, D will be for the systems people only, as with Rust.


Much of the froth about Go is dismissed by serious developers, 
but they nailed the goroutine thing. It's Go's killer feature.


I think so, too. Along with channels and channel selects to 
coordinate all those goroutines and exchange data between them. 
Without them goroutines would be pointless except for doing 
things in parallel. I'm not sure you can do selects in the 
library with little lock contention, but I'm not an expert on 
this.
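
A hedged sketch of what is available in D today: std.concurrency's 
receive pattern-matches on message type, which loosely approximates 
selecting over two channels, though it gives none of Go's select 
semantics or fairness guarantees (worker is a hypothetical example):

    import std.concurrency;

    void worker()
    {
        bool done;
        while (!done)
        {
            receive(
                (int n)    { ownerTid.send(n * 2); },  // "data channel"
                (string s) { done = (s == "quit"); }   // "control channel"
            );
        }
    }

    void main()
    {
        auto tid = spawn(&worker);
        tid.send(21);
        assert(receiveOnly!int() == 42);
        tid.send("quit");
    }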


Think of it from the perspective of attracting Erlang 
programmers, or Java/Scala programmers who use Akka.


Not wanting to be rude, but you don't stand a chance with that. 
Java has Hadoop, MongoDB, Hazelcast, Akka, Scala, Cassandra and 
MUCH more. No way you can beat all that. Hordes of average Java 
developers will be against you, because they know Java and 
nothing else and don't want to lose their status.


But Go also does not have these things. Its success is huge, 
though, and it seems mostly to be attributed to goroutines and 
channels. This made Go the language for the cloud (at least 
other people say so), which is what there is a need for now. 
Other than that, Go is drop-dead simple. You can start coding now 
and start your cloud software start-up now. There is nothing 
complicated you need to learn. D cannot compete with that (thank 
goodness it is also not a minimalistic language like Go).


Akka similarly uses its own lightweight threads, not heavyweight 
JVM threads.


Akka uses an approach like Apple's Grand Central Dispatch. As I 
understand it, so does vibe.d (using libevent). A small number of 
threads serves queues to which tasks are added. This works 
fine as long as those tasks are short runners. You can have 
50,000 long runners in Go. As long as they aren't all active, the 
system stays responsive. You can't have 50,000 long-runners in 
Akka, because they would block all the kernel threads that serve the 
task queues. The 50,001st long-running task will have to 
wait a long time until it is served. This is why Vert.x for Java 
has special worker threads: threads that are reserved for 
long-runners (and you can't have many of them).
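
A toy illustration of that blocking effect with std.parallelism (a 
deliberately tiny pool of 2 worker threads; not a claim about Akka or 
vibe.d internals): the short task is queued behind the long runners and 
cannot start until one of them finishes.

    import std.parallelism : TaskPool, task;
    import std.datetime.stopwatch : StopWatch, AutoStart;
    import std.stdio : writeln;
    import core.thread : Thread;
    import core.time : seconds;

    void main()
    {
        auto sw = StopWatch(AutoStart.yes);
        auto pool = new TaskPool(2);        // only 2 kernel threads serve the queue

        auto longA = task({ Thread.sleep(1.seconds); });
        auto longB = task({ Thread.sleep(1.seconds); });
        auto quick = task({ writeln("short task ran after ", sw.peek); });

        pool.put(longA);
        pool.put(longB);
        pool.put(quick);                    // queued behind the two long runners

        pool.finish(true);                  // prints roughly 1 second, not ~0
    }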







Re: Opportunities for D

2014-08-10 Thread Russel Winder via Digitalmars-d
On Sun, 2014-08-10 at 09:27 +, Bienlein via Digitalmars-d wrote:
[…]
 IMHO goroutines and channels are really the key. D might be a 
 better C++. But languages need a use case to make people change. 

From a marketing perspective, Go introduced goroutines (which is an
implementation of a minor variant of CSP more or less), Rust introduces
lots of things about memory management, references, etc. C and C++ have
none of these. What does D bring to the field that is new today so that
it can be used as a marketing tool?

[…]
 But Go also does not have these things. Its success is huge, 
 though, and it seems mostly to be attributed to goroutines and 
 channels. This made Go the language for the cloud (at least 
 other people say so), which is what there is a need for now. 
 Other than that, Go is drop-dead simple. You can start coding now 
 and start your cloud software start-up now. There is nothing 
 complicated you need to learn. D cannot compete with that (thank 
 goodness it is also not a minimalistic language like Go).

The core point about Go is goroutines: it means you don't have to do all
this event loop programming and continuations stuff à la Node, Vert.x, or
Vibe.d; you can use processes and channels, and the scheduling is handled
at the OS level. No more shared memory stuff.

OK so all this event loop, asyncio stuff is hip and cool, but as soon as
you have to do something that is not zero time wrt event arrival, it all
gets messy and complicated. (An oversimplification, but true, especially in
GUI programming.)

Go is otherwise a trimmed-down C and so trivial (which turns out to be a
good thing), but it also has types, instances and extension methods, which
are new and shiny and cool (even though they are neither new nor shiny).
These new things capture hearts and minds and create new active
communities.

It is true that Go is a walled-garden approach to software; the whole
package and executable management system is introverted and excluding. But
it creates a space in which people can work without distraction.

Dub has the potential to do for D what Go's package system and import
from DVCS repositories has done, and that is great. But it is no longer
new. D is just a me too language in that respect.

 Akka similarly uses its own lightweight threads, not heavyweight 
 JVM threads.
 
 Akka uses an approach like Apple's Grand Central Dispatch. As I 
 understand it, so does vibe.d (using libevent). A small number of 
 threads serves queues to which tasks are added. This works 
 fine as long as those tasks are short runners. You can have 
 50,000 long runners in Go. As long as they aren't all active, the 
 system stays responsive. You can't have 50,000 long-runners in 
 Akka, because they would block all the kernel threads that serve the 
 task queues. The 50,001st long-running task will have to 
 wait a long time until it is served. This is why Vert.x for Java 
 has special worker threads: threads that are reserved for 
 long-runners (and you can't have many of them).

And Erlang. And GPars. And std.parallelism. It is the obviously sensible
approach to management of multiple activities. D brings nothing new on
this front.

What no native code language (other than C++, in vestigial form in Anthony
Williams' Just::Thread Pro) has is dataflow. This is going to be
big in JVM-land fairly soon (well, actually it already is, but no one is
talking about it much because of commercial vested interests).

So if D got CSP, it would be "me too" but useful. If D got dataflow, D
would be the first language to support dataflow in native code
systems. Now that could sell.



-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-15 Thread Jacob Carlborg via Digitalmars-d

On 14/07/14 18:16, H. S. Teoh via Digitalmars-d wrote:


Mine is here:

http://wiki.dlang.org/User:Quickfur/DIP_scope


From the DIP:

The 'scope' keyword has been around for years, yet it is barely 
implemented and it's unclear just what it's supposed to mean


I don't know if it's worth clarifying, but scope currently has various 
features.


1. Allocate classes on the stack: scope bar = new Bar()
2. Forcing classes to be allocated on the stack: scope class Bar {}
3. The scope-statement: scope(exit) file.close()
4. Scope parameters. This is the part where it's unclear what it means/is 
supposed to mean in the current language
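
A compact sketch of those uses (items 2 and 4 only as comments, since they 
are the contested or obsolescent parts):

    import std.stdio : writeln;

    class Bar
    {
        ~this() { writeln("Bar destroyed"); }
    }

    void main()
    {
        scope bar = new Bar();               // 1. instance placed on the stack;
                                             //    destroyed at end of scope

        scope(exit) writeln("leaving main"); // 3. statement run when this scope exits

        // 2. 'scope class Bar {}' would force Bar to be stack-allocated everywhere.
        // 4. 'scope' on parameters, e.g. void f(scope int* p), is the case whose
        //    meaning the thread calls unclear.
    }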


--
/Jacob Carlborg


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-15 Thread simendsjo via Digitalmars-d
On 07/15/2014 08:42 AM, Jacob Carlborg wrote:
 On 14/07/14 18:16, H. S. Teoh via Digitalmars-d wrote:
 
 Mine is here:

 http://wiki.dlang.org/User:Quickfur/DIP_scope
 
 From the DIP:
 
 The 'scope' keyword has been around for years, yet it is barely
 implemented and it's unclear just what it's supposed to mean
 
 I don't know if it's worth clarifying, but scope currently has various
  features.
 
 1. Allocate classes on the stack: scope bar = new Bar()
 2. Forcing classes to be allocated on the stack: scope class Bar {}
 3. The scope-statement: scope(exit) file.close()
 4. Scope parameters. This is the part where it's unclear what it means/is
  supposed to mean in the current language
 

Aren't both 1 and 2 deprecated?


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-15 Thread Jacob Carlborg via Digitalmars-d

On 15/07/14 01:48, H. S. Teoh via Digitalmars-d wrote:


Yes, but since the extent of this scope is unknown from inside the
function body, it doesn't easily lend itself nicely to check things like
this:

int* ptr;
void func(scope int* arg) {
ptr = arg; // should this be allowed?
}

If we only know that 'arg' has a longer lifetime than func, but we don't
know how long it is, then we don't know if it has the same lifetime as
'ptr', or less. So it doesn't really let us do useful checks.


I was thinking that arg would have at least the same lifetime as the 
caller, i.e. the same as ptr.


--
/Jacob Carlborg


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-15 Thread Jacob Carlborg via Digitalmars-d

On 15/07/14 08:46, simendsjo wrote:


Aren't both 1 and 2 deprecated?


Depends on what you mean by deprecated. People keep saying that, 
but it's not. Nothing, except for people saying that, indicates it. No 
deprecation message, no warning, nothing about it in the documentation. 
Even if/when it is deprecated, it's not unclear what it does.


--
/Jacob Carlborg


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-15 Thread Adam D. Ruppe via Digitalmars-d

On Tuesday, 15 July 2014 at 06:42:20 UTC, Jacob Carlborg wrote:

1. Allocate classes on the stack: scope bar = new Bar()
4. Scope parameters. This is the part where it's unclear what it 
means/is supposed to mean in the current language


These are actually the same thing: if something is stack 
allocated, it must not allow the reference to escape to remain 
memory safe... and if the reference is not allowed to escape, 
stack allocating the object becomes an obvious automatic 
optimization.


People keep calling them deprecated but they really aren't - the 
escape analysis to make it memory safe just isn't implemented.


2. Forcing classes to be allocated on the stack: scope class 
Bar {}


I think this is the same thing too, just on the class instead of 
the object, but I wouldn't really defend this feature, even if 
implemented correctly, since ALL classes really ought to be scope 
compatible if possible to let the user decide on their lifetime.


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-15 Thread Jacob Carlborg via Digitalmars-d

On 15/07/14 14:47, Adam D. Ruppe wrote:


These are actually the same thing: if something is stack allocated, it
must not allow the reference to escape to remain memory safe... and if
the reference is not allowed to escape, stack allocating the object
becomes an obvious automatic optimization.

People keep calling them deprecated but they really aren't - the escape
analysis to make it memory safe just isn't implemented.


Yes, I agree.


I think this is the same thing too, just on the class instead of the
object, but I wouldn't really defend this feature, even if implemented
correctly, since ALL classes really ought to be scope compatible if
possible to let the user decide on their lifetime.


If a class is allocated on the stack, its destructor will be called (at 
least according to the spec). If you declare a class scope you know it 
will always be allocated on the stack and can take advantage of that. 
Even if all classes are scope compatible some might _only_ be 
compatible with scope.


--
/Jacob Carlborg


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-15 Thread H. S. Teoh via Digitalmars-d
On Tue, Jul 15, 2014 at 09:19:34AM +0200, Jacob Carlborg via Digitalmars-d 
wrote:
 On 15/07/14 01:48, H. S. Teoh via Digitalmars-d wrote:
 
 Yes, but since the extent of this scope is unknown from inside the
 function body, it doesn't easily lend itself nicely to check things
 like this:
 
  int* ptr;
  void func(scope int* arg) {
  ptr = arg; // should this be allowed?
  }
 
 If we only know that 'arg' has a longer lifetime than func, but we
 don't know how long it is, then we don't know if it has the same
 lifetime as 'ptr', or less. So it doesn't really let us do useful
 checks.
 
 I was thinking that arg would have at least the same lifetime as the
 caller, i.e. the same as ptr.
[...]

But what if 'ptr' is declared in a private binary-only module, and only
the signature of 'func' is known? Then what should 'scope' mean to the
compiler when 'func' is being called from another module?


T

-- 
ASCII stupid question, getty stupid ANSI.


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-15 Thread Jacob Carlborg via Digitalmars-d

On 2014-07-15 16:58, H. S. Teoh via Digitalmars-d wrote:


But what if 'ptr' is declared in a private binary-only module, and only
the signature of 'func' is known? Then what should 'scope' mean to the
compiler when 'func' is being called from another module?


Hmm, I didn't think of that :(

--
/Jacob Carlborg


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-14 Thread Jacob Carlborg via Digitalmars-d

On 13/07/14 16:37, H. S. Teoh via Digitalmars-d wrote:


We could, but how would that help static analysis within the function's
body, since the caller's scope is unknown?


Won't the caller's scope always outlive the callee's?

--
/Jacob Carlborg


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-14 Thread H. S. Teoh via Digitalmars-d
On Sun, Jul 13, 2014 at 02:58:27PM +, via Digitalmars-d wrote:
 On Friday, 11 July 2014 at 22:03:37 UTC, H. S. Teoh via Digitalmars-d wrote:
 Maybe what we should do, is to have everyone post their current
 (probably incomplete) drafts of what scope should do, so that we have
 everything on the table and we can talk about what should be kept,
 what should be discarded, etc.. It may be, that the best design is
 not what any one of us has right now, but some combination of
 multiple current proposals.
 
 I've just done so for mine:
 http://wiki.dlang.org/User:Schuetzm/scope

Mine is here:

http://wiki.dlang.org/User:Quickfur/DIP_scope


T

-- 
Turning your clock 15 minutes ahead won't cure lateness---you're just making 
time go faster!


Re: Opportunities for D

2014-07-14 Thread Dicebot via Digitalmars-d

On Sunday, 13 July 2014 at 18:20:12 UTC, Walter Bright wrote:

On 7/13/2014 4:05 AM, Dicebot wrote:
Also I was not speaking originally about all good pull 
requests just waiting
to be merged but about stuff that hits some controversial 
language/Phobos parts
and requires some decision if it can be accepted at all. 
Pretty much no one but
you can make such judgement even if there are many people with 
merge rights.


It is probably more of an issue for DMD than Phobos because 
almost any

enhancement for language needs to get your approval or go away.


That is true, but Andrei and I also rely heavily on feedback 
from you guys about those issues. For example, Sean's 
std.concurrency pull. I gotta rely on reviews from you fellows, 
because I am hardly an expert on best practices for concurrency 
programming. In fact I'm rather a newbie at it.


I will keep an eye on std.concurrency, but what should I do to 
convince you to say a word or two about a DMD PR that makes some 
language change? :)


Re: Opportunities for D

2014-07-14 Thread Walter Bright via Digitalmars-d

On 7/14/2014 9:56 AM, Dicebot wrote:

I will keep an eye on std.concurrency, but what should I do to convince you to
say a word or two about a DMD PR that makes some language change? :)


I still believe that Andrei and I need to approve language change PRs. These can 
be very disruptive and not easily reverted if they aren't right. That said, we 
still strongly rely on feedback on these from the community.


For example,

  https://issues.dlang.org/show_bug.cgi?id=11946


Re: Opportunities for D

2014-07-14 Thread Dicebot via Digitalmars-d

On Monday, 14 July 2014 at 18:30:34 UTC, Walter Bright wrote:

On 7/14/2014 9:56 AM, Dicebot wrote:
I will keep an eye std.concurrency but I what should I do to 
convince you say a

word or two about DMD PR that makes some language change? :)


I still believe that Andrei and I need to approve language change 
PRs. These can be very disruptive and not easily reverted if 
they aren't right. That said, we still strongly rely on 
feedback on these from the community.


For example,

  https://issues.dlang.org/show_bug.cgi?id=11946


I mean something like this: 
https://github.com/D-Programming-Language/dmd/pull/3651 - a change 
that was implemented, generally approved in NG discussion, 
adjusted to all review comments etc., but that still doesn't have 
even a single comment from you or Andrei on whether it even has a 
chance of being accepted. Even just a short "Thinking, not sure" is 
much less discouraging than no comments at all.


Re: Opportunities for D

2014-07-14 Thread Andrei Alexandrescu via Digitalmars-d

On 7/14/14, 12:10 PM, Dicebot wrote:

On Monday, 14 July 2014 at 18:30:34 UTC, Walter Bright wrote:

On 7/14/2014 9:56 AM, Dicebot wrote:

I will keep an eye std.concurrency but I what should I do to convince
you say a
word or two about DMD PR that makes some language change? :)


I still believe that Andrei and I need to approve language change PRs.
These can be very disruptive and not easily reverted if they aren't
right. That said, we still strongly rely on feedback on these from the
community.

For example,

  https://issues.dlang.org/show_bug.cgi?id=11946


I mean something like this:
https://github.com/D-Programming-Language/dmd/pull/3651 - a change that
was implemented, generally approved in NG discussion, adjusted to all
review comments etc., but that still doesn't have even a single comment from
you or Andrei on whether it even has a chance of being accepted. Even just a
short "Thinking, not sure" is much less discouraging than no comments at all.


Good example. Let me look into it! -- Andrei


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-14 Thread H. S. Teoh via Digitalmars-d
On Mon, Jul 14, 2014 at 10:41:10AM +0200, Jacob Carlborg via Digitalmars-d 
wrote:
 On 13/07/14 16:37, H. S. Teoh via Digitalmars-d wrote:
 
 We could, but how would that help static analysis within the
 function's body, since the caller's scope is unknown?
 
 Won't the caller's scope always outlive the callee's?
[...]

Yes, but since the extent of this scope is unknown from inside the
function body, it doesn't easily lend itself nicely to check things like
this:

int* ptr;
void func(scope int* arg) {
ptr = arg; // should this be allowed?
}

If we only know that 'arg' has a longer lifetime than func, but we don't
know how long it is, then we don't know if it has the same lifetime as
'ptr', or less. So it doesn't really let us do useful checks.


T

-- 
I suspect the best way to deal with procrastination is to put off the 
procrastination itself until later. I've been meaning to try this, but haven't 
gotten around to it yet.  -- swr


Re: Opportunities for D

2014-07-14 Thread Walter Bright via Digitalmars-d

On 7/14/2014 12:10 PM, Dicebot wrote:

On Monday, 14 July 2014 at 18:30:34 UTC, Walter Bright wrote:

For example,

  https://issues.dlang.org/show_bug.cgi?id=11946


I mean something like this:
https://github.com/D-Programming-Language/dmd/pull/3651 - a change that was
implemented, generally approved in NG discussion, adjusted to all review
comments etc., but that still doesn't have even a single comment from you or
Andrei on whether it even has a chance of being accepted. Even just a short
"Thinking, not sure" is much less discouraging than no comments at all.


At the moment my focus is to get 2.066 out. If I don't, it'll just drift on and 
never happen. 11946 is one of the problem areas.


Re: Opportunities for D

2014-07-13 Thread Walter Bright via Digitalmars-d

On 7/10/2014 5:54 AM, Dicebot wrote:

No one but Walter / Andrei can do anything about it. Right now we are in weird
situation when they call for lieutenants but are not ready to abandon decision
power. It can't possibly work that way. No amount of volunteer effort will help
when so many PR stall waiting for resolution comment from one of language
generals.


Here are the teams with Pulling Power:

  https://github.com/orgs/D-Programming-Language/teams

Team Phobos, for example, has 25 members. Including you!




Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-13 Thread Jacob Carlborg via Digitalmars-d

On 2014-07-11 16:29, H. S. Teoh via Digitalmars-d wrote:


Because the scope of the parameter 'obj' is defined to be the scope of
myFunc only, according to the current proposal.


Wouldn't it be possible to define the scope of a parameter to be the 
caller's scope?


--
/Jacob Carlborg


Re: Opportunities for D

2014-07-13 Thread Dicebot via Digitalmars-d

On Sunday, 13 July 2014 at 07:18:53 UTC, Walter Bright wrote:

On 7/10/2014 5:54 AM, Dicebot wrote:
No one but Walter / Andrei can do anything about it. Right now 
we are in weird
situation when they call for lieutenants but are not ready 
to abandon decision
power. It can't possibly work that way. No amount of volunteer 
effort will help
when so many PR stall waiting for resolution comment from one 
of language

generals.


Here are the teams with Pulling Power:

  https://github.com/orgs/D-Programming-Language/teams

Team Phobos, for example, has 25 members. Including you!


You do realize that Andrei added me to that list right after I 
posted this message, to shut me up? :grumpy:


Re: Opportunities for D

2014-07-13 Thread Dicebot via Digitalmars-d

On Sunday, 13 July 2014 at 07:18:53 UTC, Walter Bright wrote:

On 7/10/2014 5:54 AM, Dicebot wrote:
No one but Walter / Andrei can do anything about it. Right now 
we are in weird
situation when they call for lieutenants but are not ready 
to abandon decision
power. It can't possibly work that way. No amount of volunteer 
effort will help
when so many PR stall waiting for resolution comment from one 
of language

generals.


Here are the teams with Pulling Power:

  https://github.com/orgs/D-Programming-Language/teams

Team Phobos, for example, has 25 members. Including you!


Also I was not speaking originally about all good pull requests 
just waiting to be merged but about stuff that hits some 
controversial language/Phobos parts and requires some decision if 
it can be accepted at all. Pretty much no one but you can make 
such judgement even if there are many people with merge rights.


It is probably more of an issue for DMD than Phobos because 
almost any enhancement for language needs to get your approval or 
go away.


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-13 Thread H. S. Teoh via Digitalmars-d
On Sun, Jul 13, 2014 at 12:07:58PM +0200, Jacob Carlborg via Digitalmars-d 
wrote:
 On 2014-07-11 16:29, H. S. Teoh via Digitalmars-d wrote:
 
 Because the scope of the parameter 'obj' is defined to be the scope
 of myFunc only, according to the current proposal.
 
 Wouldn't it be possible to define the scope of a parameter to the
 caller's scope?
[...]

We could, but how would that help static analysis within the function's
body, since the caller's scope is unknown?


T

-- 
Truth, Sir, is a cow which will give [skeptics] no more milk, and so they are 
gone to milk the bull. -- Sam. Johnson


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-13 Thread via Digitalmars-d
On Friday, 11 July 2014 at 22:03:37 UTC, H. S. Teoh via 
Digitalmars-d wrote:

Maybe what we should do, is to have everyone post their current
(probably incomplete) drafts of what scope should do, so that 
we have
everything on the table and we can talk about what should be 
kept, what
should be discarded, etc.. It may be, that the best design is 
not what
any one of us has right now, but some combination of multiple 
current

proposals.


I've just done so for mine:
http://wiki.dlang.org/User:Schuetzm/scope


Re: Opportunities for D

2014-07-13 Thread Walter Bright via Digitalmars-d

On 7/13/2014 3:47 AM, Dicebot wrote:

On Sunday, 13 July 2014 at 07:18:53 UTC, Walter Bright wrote:

On 7/10/2014 5:54 AM, Dicebot wrote:

No one but Walter / Andrei can do anything about it. Right now we are in weird
situation when they call for lieutenants but are not ready to abandon decision
power. It can't possibly work that way. No amount of volunteer effort will help
when so many PR stall waiting for resolution comment from one of language
generals.


Here are the teams with Pulling Power:

  https://github.com/orgs/D-Programming-Language/teams

Team Phobos, for example, has 25 members. Including you!


You do realize that Andrei added me to that list right after I posted this
message, to shut me up? :grumpy:


No, I didn't realize that, thanks for letting me know. But there are still 24 
other members, which is a lot more than just Andrei and I.




Re: Opportunities for D

2014-07-13 Thread Walter Bright via Digitalmars-d

On 7/13/2014 4:05 AM, Dicebot wrote:

Also I was not speaking originally about all good pull requests just waiting
to be merged but about stuff that hits some controversial language/Phobos parts
and requires some decision if it can be accepted at all. Pretty much no one but
you can make such judgement even if there are many people with merge rights.

It is probably more of an issue for DMD than Phobos because almost any
enhancement for language needs to get your approval or go away.


That is true, but Andrei and I also rely heavily on feedback from you guys about 
those issues. For example, Sean's std.concurrency pull. I gotta rely on reviews 
from you fellows, because I am hardly an expert on best practices for 
concurrency programming. In fact I'm rather a newbie at it.


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-12 Thread via Digitalmars-d
On Friday, 11 July 2014 at 21:04:05 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Thu, Jul 10, 2014 at 08:10:36PM +, via Digitalmars-d 
wrote:
Hmm. Seems that you're addressing a somewhat wider scope than 
what I had
in mind. I was thinking mainly of 'scope' as does not escape 
the body
of this block, but you're talking about a more general case of 
being

able to specify explicit lifetimes.



Indeed, but it includes what you're suggesting. For most use 
cases, just `scope` without an explicit lifetime annotation is 
fully sufficient.



[...]
A problem that has been discussed in a few places is safely 
returning
a slice or a reference to an input parameter. This can be 
solved

nicely:

scope!haystack(string) findSubstring(
    scope string haystack,
    scope string needle
);

Inside `findSubstring`, the compiler can make sure that no 
references to `haystack` or `needle` can escape (an unqualified 
`scope` can be used here, no need to specify an owner), but it 
will allow returning a slice from it, because the signature says: 
the return value will not live longer than the parameter `haystack`.


This does seem to be quite a compelling argument for explicit 
scopes. It

does make it more complex to implement, though.


[...]
An interesting application is the old `byLine` problem, where 
the
function keeps an internal buffer which is reused for every 
line that
is read, but a slice into it is returned. When a user naively 
stores
these slices in an array, she will find that all of them have 
the same

content, because they point to the same buffer. See how this is
avoided with `scope!(const ...)`:


This seems to be something else now. I'll have to think about 
this a bit
more, but my preliminary thought is that this adds yet another 
level of
complexity to 'scope', which is not necessarily a bad thing, 
but we

might want to start out with something simpler first.


It's definitely an extension and not as urgently necessary, 
although it fits well into the general topic of borrowing: 
`scope` by itself provides mutable borrowing, but `scope!(const 
...)` provides const borrowing, in the sense that another object 
temporarily takes ownership of the value, so that the original 
owner can only read the object until it is returned by the 
borrowed value going out of scope. I mentioned it here because it 
seemed to be an easy extension that could solve an interesting 
long-standing problem for which we only have workarounds today 
(`byLineCopy` IIRC).
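
For reference, a small sketch of the pitfall and today's workarounds (it 
assumes a file named input.txt exists; byLineCopy is the Phobos workaround 
mentioned):

    import std.stdio : File;
    import std.array : array;
    import std.algorithm.iteration : map;

    void main()
    {
        // Pitfall: byLine reuses one internal buffer, so these stored
        // slices all end up aliasing the last line read.
        auto aliased = File("input.txt").byLine.array;

        // Workarounds: copy each line explicitly, or use byLineCopy.
        auto copied = File("input.txt").byLine.map!(l => l.idup).array;
        auto lines  = File("input.txt").byLineCopy.array;
    }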


And I have to add that it's not completely thought out yet. For 
example, might it make sense to have `scope!(immutable ...)`, 
`scope!(shared ...)`, and if yes, what would they mean...





[...]
An open question is whether there needs to be an explicit 
designation of GC'd values (for example by `scope!static` or 
`scope!GC`), to say that a given value lives as long as it's 
needed (or forever).


Shouldn't unqualified values already serve this purpose?




Likely yes. It might however be useful to contemplate, especially 
with regards to allocators.



[...]

Now, for the problems:

Obviously, there is quite a bit of complexity involved. I can 
imagine
that inferring the scope for templates (which is essential, 
just as

for const and the other type modifiers) can be complicated.


I'm thinking of aiming for a design where the compiler can 
infer all
lifetimes automatically, and the user doesn't have to. I'm not 
sure if
this is possible, but based on what Walter said, it would be 
best if we
infer as much as possible, since users are lazy and are 
unlikely to be
thrilled at the idea of having to write additional annotations 
on their

types.


I agree. It's already getting ugly with `const pure nothrow @safe 
@nogc`; adding another annotation should not be done 
lightly. However, if the compiler could infer all the 
lifetimes (which I'm quite sure isn't possible, see the 
haystack-needle example), I don't see why we'd need `scope` at 
all. It would at most be a way not to break backward 
compatibility, but that would be another case where you could say 
that D has it backwards, like un-@safe by default...




My original proposal was aimed at this, that's why I didn't put 
in
explicit lifetimes. I was hoping to find a way to define things 
such
that the lifetime is unambiguous from the context in which 
'scope' is
used, so that users don't ever have to write anything more than 
that.
This also makes the compiler's life easier, since we don't have 
to keep
track of who owns what, and can just compute the lifetime from 
the
surrounding context. This may require sacrificing some 
precision in
lifetimes, but if it helps simplify things while still giving 
adequate

functionality, I think it's a good compromise.


I agree it looks a bit intimidating at first glance, but as far 
as I can tell it should be relatively straightforward to 
implement. I'll explain how I think it could be done:


The obvious things: The parser needs 

Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-12 Thread via Digitalmars-d
On Friday, 11 July 2014 at 22:03:37 UTC, H. S. Teoh via 
Digitalmars-d wrote:
Along these lines, I'm wondering if turtles all the way down 
is the
wrong way of looking at it. Consider, for example, an n-level 
deep
nesting of aggregates. If obj.nest1 is const, then 
obj.nest1.nest2.x
must also be const, because otherwise we break the const 
system. So
const is transitive downwards. But if obj.nest1 is a scoped 
reference
type with lifetime L1, that doesn't necessarily mean 
obj.nest1.y only
has lifetime L1. It may be a pointer that points to an infinite 
lifetime
object, for example, so it's not a problem that the pointer 
goes out of
scope before the object pointed to. OTOH, if obj.nest1 has 
scope L1,
then obj itself cannot have a longer lifetime than L1, 
otherwise we may

access obj.nest1 after its lifetime is over. So the lifetime of
obj.nest1 must propagate *upwards* (or outwards).


I'm not so sure about transitivity either, although I started 
with it. One reason that `const`, `immutable` and `shared` need 
to be transitive is that we can then use this fact to infer other 
properties from it, e.g. thread-safety. I don't really see such 
advantages for `scope`, but instead it would make handling 
`scope` in aggregates extremely complicated. And it wouldn't make 
much sense either IMO, because aggregates are usually defined 
somewhere else than where they are used, and often by a different 
author, too. Therefore, they come with their own ownership 
strategies, and it's simply not possible to force a different one 
onto them from the outside. For this reason, I now tend towards 
intransitivity.


There is also something else that became clear to me: If an 
important use case is found that requires transitivity, nothing 
is really lost. We know all the types that are involved, and we 
can check whether all references contained in them are marked as 
`scope` using introspection, even without additional compiler 
support beyond a simple trait, just as we can today check for 
things like `hasUnsharedAliasing`! Therefore, we wouldn't even 
close the doors for further improvements if we decide for 
intransitivity.


Re: Opportunities for D

2014-07-12 Thread Walter Bright via Digitalmars-d

On 7/10/2014 10:53 PM, deadalnix wrote:

Most of them never gathered any attention.


Sometimes, when the idea is right, you still need to get behind it and push. 
"Build it and they will come" is a stupid Hollywood fantasy.


I've also written DIPs, which garnered zero comments. I implemented them, and 
the PR's sat there for some time, until I finally harangued some people via 
email to get them pulled.


I'm not complaining, I'm just saying that's just how it is. The DIPs were for 
things that looked obscure and were technically complex, a surefire recipe for a 
collective yawn even though I knew they were crucial (inferring uniqueness).


If you're looking for lots of comments, start a nice bikeshedding thread about 
whitespace conventions :-)




Re: Opportunities for D

2014-07-12 Thread Johannes Pfau via Digitalmars-d
Am Sat, 12 Jul 2014 13:27:26 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 On 7/10/2014 10:53 PM, deadalnix wrote:
  Most of them never gathered any attention.
 
 Sometimes, when the idea is right, you still need to get behind and
 push it. Build it and they will come is a stupid hollywood fantasy.
 
 I've also written DIPs, which garnered zero comments. I implemented
 them, and the PR's sat there for some time, until I finally harangued
 some people via email to get them pulled.
 
 I'm not complaining, I'm just saying that's just how it is. The DIPs
 were for things that looked obscure and were technically complex, a
 surefire recipe for a collective yawn even though I knew they were
 crucial (inferring uniqueness).
 
 If you're looking for lots of comments, start a nice bikeshedding
 thread about whitespace conventions :-)
 

But you've got a nice bonus:
if somebody doesn't like your pull request, you can just merge it anyway.

But if you veto something, the only one who could probably merge it anyway is
Andrei.


Re: Opportunities for D

2014-07-12 Thread Andrei Alexandrescu via Digitalmars-d

On 7/12/14, 1:38 PM, Johannes Pfau wrote:

But you've got some nice bonus:
If somebody doesn't like your pull request you can just merge it anyway.


That hasn't happened in a really long time, and the last time it did was 
before we had due process in place.



But if you veto something the only one who can probably merge anyway is
Andrei.


Such situations are best resolved by building consensus and a shared vision.


Andrei



Re: Opportunities for D

2014-07-12 Thread Andrei Alexandrescu via Digitalmars-d

On 7/12/14, 1:27 PM, Walter Bright wrote:

On 7/10/2014 10:53 PM, deadalnix wrote:

Most of them never gathered any attention.


Sometimes, when the idea is right, you still need to get behind and push
it. Build it and they will come is a stupid hollywood fantasy.

I've also written DIPs, which garnered zero comments. I implemented
them, and the PR's sat there for some time, until I finally harangued
some people via email to get them pulled.

I'm not complaining, I'm just saying that's just how it is.


Indeed that's how it is. It's also a quality issue - we can reasonably 
assume that a perfect slam-dunk DIP would be easily recognized; many of 
the current DIPs (including mine) need work and dedication, which is 
more difficult to find.


Andrei




Re: Opportunities for D

2014-07-12 Thread Walter Bright via Digitalmars-d

On 7/12/2014 1:38 PM, Johannes Pfau wrote:

But you've got some nice bonus:
If somebody doesn't like your pull request you can just merge it anyway.


I'd only do that in an emergency. I'll also just pull the ones for D1.



But if you veto something the only one who can probably merge anyway is
Andrei.


Andrei and I don't always agree, but we've not gone around overriding each 
other.



Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-11 Thread deadalnix via Digitalmars-d
I've been toying with a similar proposal for a while, but I quite 
fail to put the last pieces in place. It goes in a similar direction.


On Thursday, 10 July 2014 at 17:04:24 UTC, H. S. Teoh via 
Digitalmars-d wrote:
   - For function parameters, this lifetime is the scope of the 
function

 body.


Some kind of inout scope seems less limiting. The caller knows the 
scope, the callee knows that it is greater than itself. This is 
important, as local variables in the outer scope of the function 
have more restricted scope and must not be assignable.


Each parameter has a DIFFERENT lifetime, but it is impossible to 
tell which one is larger from the callee's perspective. Thus you 
must have a more complex lifetime definition than greater/smaller 
lifetime. Yup, when you get into the details, quantum effects 
start to arise.


   - An unscoped variable is regarded to have infinite 
lifetime.




So it is not unscoped, but I'm simply nitpicking on that one.

  - Since a scoped return type has its lifetime as part of 
its type,
the type system ensures that scoped values never 
escape their
lifetime. For example, if we are sneaky and return a 
pointer to
an inner function, the type system will prevent 
leakage of the


This gets quite tricky to define when you can have both this and a 
context pointer. Once again, you get into a situation where you 
have 2 non-sortable lifetimes to handle. And worse, you'll be 
creating values out of that mess :)



- Aggregates:

   - It's turtles all the way down: members of scoped 
aggregates also

 have scoped type, with lifetime inherited from the parent
 aggregate. In other words, the lifetime of the aggregate 
is

 transitive to the lifetime of its members.


Yes, the rule for access is transitive. But the rule for writes is 
antitransitive. It gets tricky when you consider that a member 
variable may have to be able to extend the lifetime of one of 
its members.


I.e. a member of lifetime B in a value of lifetime A sees its 
lifetime becoming max(A, B). Considering lifetimes aren't always 
sortable (as shown in the 2 examples), this is tricky.


This basically means that you have to define what happens for 
non-sortable lifetimes, and what happens for unions/intersections of 
lifetimes. As you see, I've banged my head quite a lot on that 
one. I'm fairly confident that this is solvable, but it will 
definitely require a lot of effort to iron out all the details.


- Passing parameters: since unscoped values are regarded to 
have
  infinite lifetime, it's OK to pass unscoped values into 
scoped
  function parameters: it's a narrowing of lifetime of the 
original
  value, which is allowed. (What's not allowed is expanding 
the lifetime

  of a scoped value.)



Get rid of the whole concept of unscoped, and you get rid of a 
whole class of redundant definitions that need to be done.


I'm sure there are plenty of holes in this proposal, so 
destroy away.

;-)



It needs some more ironing out. But I'm happy to see that some people 
came up with proposals that are close to what I had in mind.


The above-mentioned details may seem scary, but I'm confident 
they will only cause problems in a small variety of cases.


An aspect of the proposal that isn't mentioned is postblit and 
destruction; scoping will need to redefine these.


Ultimately I love the idea and think D should go in that 
direction at some point. But for now I'd prefer to see things 
ironed out in general in D (@safe is a good example).


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-11 Thread deadalnix via Digitalmars-d

On Thursday, 10 July 2014 at 20:10:38 UTC, Marc Schütz wrote:
Instead of lifetime intersections with `&` (I believe Timon 
proposed that in the original thread), simply specify multiple 
owners: `scope!(a, b)`. This works, because as far as I can 
see there is no need for lifetime unions, only intersections.




There are unions.

class A {
   scope!s1(A) a;
}

scope!s2(A) b;

b.a; // = this has union lifetime of s1 and s2.


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-11 Thread Jacob Carlborg via Digitalmars-d

On 10/07/14 20:15, H. S. Teoh via Digitalmars-d wrote:


class C {}
C myFunc(C obj) {
    obj.doSomething();
    return obj; // will be rejected if parameters are scoped by default
}


Hmm, why wouldn't that work? The scope where you called myFunc is 
guaranteed to outlive myFunc.


--
/Jacob Carlborg


Re: Opportunities for D

2014-07-11 Thread Nick Treleaven via Digitalmars-d

On 10/07/2014 19:03, Walter Bright wrote:

On 7/10/2014 9:00 AM, Nick Treleaven wrote:

On 09/07/2014 20:55, Walter Bright wrote:

   Unique!(int*) u = new int;   // must work


That works, it's spelled:

Unique!int u = new int;


I'm uncomfortable with that design, as T can't be a class ref or a
dynamic array.


It does currently work with class references, but not dynamic arrays:

Unique!Object u = new Object;

It could be adjusted so that all non-value types are treated likewise:

Unique!(int[]) v = [1, 3, 2];


   int* p = new int;
   Unique!(int*) u = p; // must fail


The existing design actually allows that, but nulls p:

  [...]

If there are aliases of p before u is constructed, then u is not the
sole owner
of the reference (mentioned in the docs):
http://dlang.org/phobos-prerelease/std_typecons.html#.Unique


Exactly. It is not checkable and not good enough.


In that case we'd need to deprecate Unique.this(ref RefT p) then.


Note that as of 2.066 the compiler tests for uniqueness of an expression
by seeing if it can be implicitly cast to immutable. It may be possible
to do that with Unique without needing compiler modifications.


Current Unique has a non-ref constructor that only takes rvalues. Isn't 
that good enough to detect unique expressions?
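
A minimal sketch of the cases being discussed, using std.typecons.Unique as 
it exists today (the "must fail" case is left commented out, since per the 
thread the current ref constructor accepts it and merely nulls the source):

    import std.typecons : Unique;

    void main()
    {
        Unique!int u = new int;        // rvalue construction: the "must work" case
        Unique!Object o = new Object;  // also works for class references

        int* p = new int;
        // Unique!int v = p;           // the "must fail" case: p keeps an alias;
        //                             // per the thread, this compiles but nulls p
    }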



Also related is whether we use alias this to expose the resource
(allowing
mutation but not replacement) or if we use opDispatch. Currently, it
uses opDot,
which AFAICT is basically opDispatch. If we use alias this, that's
a(nother)
hole exposing non-unique access to the resource.


The holes must be identified and closed.


OK, opDispatch then.


BTW, I'm amenable to adjusting the compiler to recognize Unique and help
out as a last resort.




Re: Opportunities for D

2014-07-11 Thread John Colvin via Digitalmars-d

On Thursday, 10 July 2014 at 22:50:51 UTC, Walter Bright wrote:

On 7/10/2014 1:52 PM, bearophile wrote:

Walter Bright:

I can't imagine users going to the bother of typing all that, 
let alone what
happens when they do it wrong. Most users don't really have a 
good handle on
what the lifetimes of their data are, so how are they going 
to annotate it

correctly?


I suggest you go to the Rust mailing list and ask this 
question again.


Rust has very little experience in real projects. I know that 
people see all the hype about Rust and believe it has proved 
itself, but it hasn't.


I've read other papers about annotations in Java and how people 
just refused to annotate their references.


This might have something to do with both the mindset of Java 
(isn't the runtime supposed to take care of this sort of 
thing?) and the fact that Java is already monstrously verbose.


Re: Opportunities for D

2014-07-11 Thread Jacob Carlborg via Digitalmars-d

On 10/07/14 22:31, Walter Bright wrote:


I don't know the PR link nor do I know what pseudonym you use on github,
so please help!

I reiterate my complaint that people use virtual functions for their
github handles. There's no reason to. Who knows that 9il is actually
Ilya Yaroshenko? Took me 3 virtual function dispatches to find that out!


Speaking of GitHub pseudonyms: I think it's very confusing that Hara 
Kenji uses a different author name for his commits than his GitHub 
pseudonym. He commits as k-hara, which doesn't exist on GitHub. But 
his GitHub pseudonym is 9rnsr.


--
/Jacob Carlborg


Re: Opportunities for D

2014-07-11 Thread Nordlöw

On Tuesday, 8 July 2014 at 23:43:47 UTC, Meta wrote:

Is the code public already ?


https://github.com/andralex/std_allocator


Maybe Andrei should remove this outdated version to reduce 
confusion, if nobody uses it that is :)


/Per


Re: Opportunities for D

2014-07-11 Thread Sean Kelly via Digitalmars-d
On Thursday, 10 July 2014 at 21:46:50 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 10 July 2014 at 21:40:15 UTC, Sean Kelly wrote:

:-)  To compensate, I use the same virtual function literally
everywhere.  Same icon photo too.


That's Go…


And Go is awesome. I could change it to my face, but since that's 
on Gravatar it would show up all over the place and I don't 
really want that.


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-11 Thread H. S. Teoh via Digitalmars-d
On Fri, Jul 11, 2014 at 08:56:10AM +0200, Jacob Carlborg via Digitalmars-d 
wrote:
 On 10/07/14 20:15, H. S. Teoh via Digitalmars-d wrote:
 
  class C {}
  C myFunc(C obj) {
      obj.doSomething();
      return obj; // will be rejected if parameters are scoped by default
  }
 
 Hmm, why wouldn't that work? The scope where you called myFunc is
 guaranteed to outlive myFunc.
[...]

Because the scope of the parameter 'obj' is defined to be the scope of
myFunc only, according to the current proposal.


T

-- 
What are you when you run out of Monet? Baroque.


Re: Opportunities for D

2014-07-11 Thread Wyatt via Digitalmars-d

On Thursday, 10 July 2014 at 20:31:53 UTC, Walter Bright wrote:


I reiterate my complaint that people use virtual functions 
for their github handles. There's no reason to. Who knows that 
9il is actually Ilya Yaroshenko? Took me 3 virtual function 
dispatches to find that out!



So, final by default in D? ;)

-Wyatt


Re: Opportunities for D

2014-07-11 Thread H. S. Teoh via Digitalmars-d
On Fri, Jul 11, 2014 at 01:14:37AM +, Meta via Digitalmars-d wrote:
 On Friday, 11 July 2014 at 01:08:59 UTC, Andrei Alexandrescu wrote:
 On 7/10/14, 2:25 PM, Walter Bright wrote:
 On 7/10/2014 1:49 PM, Robert Schadek via Digitalmars-d wrote:
 https://github.com/D-Programming-Language/phobos/pull/1977
 indexOfNeither
 
 I want to defer this to Andrei.
 
 Merged. -- Andrei
 
 For any other aspiring lieutenants out there, this[0] has been sitting
 around for 5 months now.
 
 [0]https://github.com/D-Programming-Language/phobos/pull/1965#issuecomment-40362545

Not that I'm a lieutenant or anything, but I did add some comments.


T

-- 
Some days you win; most days you lose.


Re: Opportunities for D

2014-07-11 Thread Walter Bright via Digitalmars-d

On 7/11/2014 4:44 AM, Nick Treleaven wrote:

On 10/07/2014 19:03, Walter Bright wrote:

On 7/10/2014 9:00 AM, Nick Treleaven wrote:

On 09/07/2014 20:55, Walter Bright wrote:

   Unique!(int*) u = new int;   // must work


That works, it's spelled:

Unique!int u = new int;


I'm uncomfortable with that design, as T can't be a class ref or a
dynamic array.


It does currently work with class references, but not dynamic arrays:

 Unique!Object u = new Object;

It could be adjusted so that all non-value types are treated likewise:

 Unique!(int[]) v = [1, 3, 2];


   int* p = new int;
   Unique!(int*) u = p; // must fail


The existing design actually allows that, but nulls p:

  [...]

If there are aliases of p before u is constructed, then u is not the
sole owner
of the reference (mentioned in the docs):
http://dlang.org/phobos-prerelease/std_typecons.html#.Unique


Exactly. It is not checkable and not good enough.


In that case we'd need to deprecate Unique.this(ref RefT p) then.


Note that as of 2.066 the compiler tests for uniqueness of an expression
by seeing if it can be implicitly cast to immutable. It may be possible
to do that with Unique without needing compiler modifications.


Current Unique has a non-ref constructor that only takes rvalues. Isn't that
good enough to detect unique expressions?


No, see the examples I gave earlier.
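
(For reference, the kind of lvalue rejection being asked for can be sketched in a few lines. This is a hypothetical MyUnique, not the current std.typecons.Unique, and it relies on the usual rule that lvalues prefer a ref overload:)

    struct MyUnique(T)
    {
        private T* payload;

        this(T* p) { payload = p; }   // binds only rvalues, e.g. `new T`
        @disable this(ref T* p);      // lvalues pick this overload -> error
        @disable this(this);          // no copies, ownership can only move
    }

    void main()
    {
        auto a = MyUnique!int(new int);   // ok: freshly allocated, no aliases
        int* q = new int;
        // auto b = MyUnique!int(q);      // rejected: q is an lvalue, may be aliased
    }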



Re: Opportunities for D

2014-07-11 Thread via Digitalmars-d

On Thursday, 10 July 2014 at 22:53:18 UTC, Walter Bright wrote:

On 7/10/2014 1:57 PM, Marc Schütz schue...@gmx.net wrote:

That leaves relatively few cases


Right, and do those cases actually matter?



Besides what I mentioned there is also slicing and ranges (not 
only of arrays). These are more likely to be implemented as 
templates, though.


I'm a big believer in attribute inference, because explicit 
attributes are generally a failure with users.


The average end user probably doesn't need to use explicit 
annotations a lot, but they need to be there for library authors. 
I don't think it's possible to avoid annotations completely but 
still get the same functionality just by inferring them 
internally, if that is what you're aiming at...


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-11 Thread via Digitalmars-d

On Friday, 11 July 2014 at 06:49:26 UTC, deadalnix wrote:

On Thursday, 10 July 2014 at 20:10:38 UTC, Marc Schütz wrote:
Instead of lifetime intersections with `&` (I believe Timon 
proposed that in the original thread), simply specify multiple 
owners: `scope!(a, b)`. This works, because as far as I can 
see there is no need for lifetime unions, only intersections.




There are unions.

class A {
   scope!s1(A) a;
}

scope!s2(A) b;

b.a; // = this has union lifetime of s1 and s2.


How so? `s2` must not extend after `s1`, because otherwise it 
would be illegal to store a `scope!s1` value in `scope!s2`. From 
the other side, `s1` must not start after `s2`.


This means that the lifetime of `b.a` is `s1`, just as it has 
been annotated, no matter what the lifetime of `b` is. In fact, 
because `s1` can be longer than `s2`, a copy of `b.a` may safely 
be kept around after `b` is deleted (but of course not longer 
than `s1`).


Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-11 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 10, 2014 at 08:10:36PM +, via Digitalmars-d wrote:
 I've been working on a proposal for ownership and borrowing since some
 time, and I seem to have come to a very similar result as you have. It
 is not really ready, because I keep discovering weaknesses, and can
 only work on it in my free time, but I'm glad this topic is finally
 addressed. I'll write about what I have now:
 
 First of all, as you've already stated, scope needs to be a type
 modifier (currently it's a storage class, I think). This has
 consequences for the syntax of any parameters it takes, because for
 type modifiers there need to be type constructors. This means, the
 `scope(...)` syntax is out. I suggest to use template instantiation
 syntax instead: `scope!(...)`, which can be freely combined with the
 type constructor syntax: `scope!lifetime(MyClass)`.
 
 Explicit lifetimes are indeed necessary, but dedicated identifiers for
 them are not. Instead, it can directly refer to symbol of the owner.
 Example:
 
 int[100] buffer;
 scope!buffer(int[]) slice;

Hmm. Seems that you're addressing a somewhat wider scope than what I had
in mind. I was thinking mainly of 'scope' as "does not escape the body
of this block", but you're talking about a more general case of being
able to specify explicit lifetimes.

[...]
 A problem that has been discussed in a few places is safely returning
 a slice or a reference to an input parameter. This can be solved
 nicely:
 
 scope!haystack(string) findSubstring(
 scope string haystack,
 scope string needle
 );
 
 Inside `findSubstring`, the compiler can make sure that no references
 to `haystack` or `needle` can escape (an unqualified `scope` can be
 used here, no need to specify an owner), but it will allow returning
 a slice from it, because the signature says: The return value will
 not live longer than the parameter `haystack`.

This does seem to be quite a compelling argument for explicit scopes. It
does make it more complex to implement, though.
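
(To illustrate why, a hypothetical caller under those annotations; nothing here is accepted by a current compiler:)

    string escapeIt(scope string haystack, scope string needle)
    {
        scope!haystack(string) hit = findSubstring(haystack, needle);
        // return hit;      // rejected: `hit` must not outlive `haystack`,
                            // but the plain return type claims infinite lifetime
        return hit.idup;    // fine: the copy has an independent lifetime
    }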


[...]
 An interesting application is the old `byLine` problem, where the
 function keeps an internal buffer which is reused for every line that
 is read, but a slice into it is returned. When a user naively stores
 these slices in an array, she will find that all of them have the same
 content, because they point to the same buffer. See how this is
 avoided with `scope!(const ...)`:

This seems to be something else now. I'll have to think about this a bit
more, but my preliminary thought is that this adds yet another level of
complexity to 'scope', which is not necessarily a bad thing, but we
might want to start out with something simpler first.
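
(For readers who have not hit the gotcha being described, it looks roughly like this today, since std.stdio.byLine reuses one internal buffer:)

    import std.stdio;

    void main()
    {
        char[][] aliased;
        string[] copies;
        foreach (line; stdin.byLine())
        {
            aliased ~= line;       // these slices may all point into the same
                                   // reused buffer and be overwritten later
            copies  ~= line.idup;  // an explicit copy is what's needed today
        }
        writeln(copies.length, " lines read");
    }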


[...]
 An open question is whether there needs to be an explicit designation
 of GC'd values (for example by `scope!static` or `scope!GC`), to say
 that a given value lives as long as it's needed (or forever).

Shouldn't unqualified values already serve this purpose?


[...]
 Now, for the problems:
 
 Obviously, there is quite a bit of complexity involved. I can imagine
 that inferring the scope for templates (which is essential, just as
 for const and the other type modifiers) can be complicated.

I'm thinking of aiming for a design where the compiler can infer all
lifetimes automatically, and the user doesn't have to. I'm not sure if
this is possible, but based on what Walter said, it would be best if we
infer as much as possible, since users are lazy and are unlikely to be
thrilled at the idea of having to write additional annotations on their
types.

My original proposal was aimed at this, that's why I didn't put in
explicit lifetimes. I was hoping to find a way to define things such
that the lifetime is unambiguous from the context in which 'scope' is
used, so that users don't ever have to write anything more than that.
This also makes the compiler's life easier, since we don't have to keep
track of who owns what, and can just compute the lifetime from the
surrounding context. This may require sacrificing some precision in
lifetimes, but if it helps simplify things while still giving adequate
functionality, I think it's a good compromise.


[...]
 I also have a few ideas about owned types and move semantics, but this
 is mostly independent from borrowing (although, of course, it
 integrates nicely with it). So, that's it, for now. Sorry for the long
 text. Thoughts?

It seems that you're addressing the full borrowed reference/pointer problem, which
is something necessary. But I was thinking more in terms of the baseline
functionality -- what is the simplest design for 'scope' that still
gives useful semantics that covers most of the cases? I know there are
some tricky corner cases, but I'm wondering if we can somehow find an
easy solution for the easy parts (presumably the more common parts),
while still allowing for a way to deal with the hard parts.

At least for now, I'm thinking in the direction of finding something
with simple semantics that, at the same time, 

Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)

2014-07-11 Thread H. S. Teoh via Digitalmars-d
On Fri, Jul 11, 2014 at 06:41:47AM +, deadalnix via Digitalmars-d wrote:
[...]
 On Thursday, 10 July 2014 at 17:04:24 UTC, H. S. Teoh via Digitalmars-d
 wrote:
- For function parameters, this lifetime is the scope of the
function body.
 
 Some kind of inout scope seems less limiting. The caller knows the
 scope, the callee knows that it is greater than itself. It is important
 as local variables in the outer scope of the function have more
 restricted scope and must not be assignable.
 
 Each parameter has a DIFFERENT lifetime, but it is impossible to tell
 which one is larger from the callee perspective. Thus you must have a
 more complex lifetime definition than greater/smaller lifetime. Yup,
 when you get into the details, quantum effects start to arise.

Looks like we might need to use explicit lifetimes for this. Unless
there's a way to simplify it -- i.e., we don't always need exact
lifetimes, as long as the estimated lifetime is never larger than the
actual lifetime. From the perspective of the callee, for example, if the
lifetimes of both parameters are longer than it can see (i.e., longer
than the lifetimes of its parent lexical scopes) then it doesn't matter
what the exact lifetimes are, it can be treated as an unknown value with
a lower bound, as long as it never tries to assign anything with
lifetime <= that lower bound. The caller already knows what these
lifetimes are from outside, but the function may not need to know.

At least, I'm hoping this kind of simplifications will still allow us to
do what we need, while reducing complexity.
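
To make that concrete, a sketch of what the callee-side rule would allow and forbid (hypothetical annotations, nothing a current compiler checks):

    void callee(scope ref int* dest, scope int* src)
    {
        int local;
        dest = &local;   // must be rejected: &local lives no longer than this
                         // call, i.e. no longer than dest's lower bound
        dest = src;      // also unprovable here: src's and dest's lifetimes are
                         // both "at least this call" but otherwise unordered
    }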


- An unscoped variable is regarded to have infinite lifetime.
 
 
 So it is not unscoped, but I'm simply nitpicking on that one.

Well, yes, the reason I wrote that line was to make the definition
uniform across both scoped and unscoped types. :)


   - Since a scoped return type has its lifetime as part of its
 type, the type system ensures that scoped values never
 escape their lifetime. For example, if we are sneaky and
 return a pointer to an inner function, the type system will
 prevent leakage of the
 
 This gets quite tricky to define when you can have both this and a
 context pointer. Once again, you get into a situation where you have 2
 non-sortable lifetimes to handle. And worse, you'll be creating values
 out of that mess :)

Is it possible to simplify this by taking the minimum of the two
lifetimes (i.e. intersection)? Or will that run into unsolvable cases?


 - Aggregates:
 
- It's turtles all the way down: members of scoped aggregates
  also have scoped type, with lifetime inherited from the parent
  aggregate. In other words, the lifetime of the aggregate is
  transitive to the lifetime of its members.
 
 Yes rule for access is transitivity. But the rule to write is
 antitransitive. It gets tricky when you consider that a member
 variable may have to be able to extend the lifetime of one of its
 member.
 
 IE a member of lifetime B in a value of lifetime A sees its lifetime
 becoming max(A, B). Considering lifetimes aren't always sortable (as
 shown in 2 examples), this is tricky.
 
 This basically means that you have to define what happens for
 non-sortable lifetimes, and what happens for unions/intersections of
 lifetimes. As you see, I've banged my head quite a lot on that one. I'm
 fairly confident that this is solvable, but it definitely requires a lot
 of effort to iron out all the details.

Along these lines, I'm wondering if turtles all the way down is the
wrong way of looking at it. Consider, for example, an n-level deep
nesting of aggregates. If obj.nest1 is const, then obj.nest1.nest2.x
must also be const, because otherwise we break the const system. So
const is transitive downwards. But if obj.nest1 is a scoped reference
type with lifetime L1, that doesn't necessarily mean obj.nest1.y only
has lifetime L1. It may be a pointer that points to an infinite lifetime
object, for example, so it's not a problem that the pointer goes out of
scope before the object pointed to. OTOH, if obj.nest1 has scope L1,
then obj itself cannot have a longer lifetime than L1, otherwise we may
access obj.nest1 after its lifetime is over. So the lifetime of
obj.nest1 must propagate *upwards* (or outwards).

This means that scope is transitive outwards, which is the opposite of
const/immutable, which are transitive inwards! So it's not turtles all
the way down, but pigeons all the way up. :-P
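
The inward half of that is just today's const and easy to check (a minimal sketch); the outward-propagating rule for scope is of course still hypothetical:

    struct Nest2 { int x; int* q; }
    struct Nest1 { Nest2 nest2; }
    struct Obj   { Nest1 nest1; }

    void demo(ref const(Obj) obj)
    {
        // obj.nest1.nest2.x = 1;   // error: const is transitive downwards
        // *obj.nest1.nest2.q = 1;  // error: even through the pointer
    }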


 - Passing parameters: since unscoped values are regarded to have
   infinite lifetime, it's OK to pass unscoped values into scoped
   function parameters: it's a narrowing of lifetime of the original
   value, which is allowed. (What's not allowed is expanding the
   lifetime of a scoped value.)
 
 
 Get rid of the whole concept of unscoped, and you get rid of a whole
 class of redundant definitions that need to be done.

OK, let's just say unscoped == scope with infinite lifetime. :)


 I'm sure there are plenty 

Re: Opportunities for D

2014-07-11 Thread Brad Anderson via Digitalmars-d

On Thursday, 10 July 2014 at 21:29:30 UTC, Dmitry Olshansky wrote:


Not digging into the whole thread.

9. Extensible I/O package to replace our monolithic std.stdio 
sitting awkwardly on top of C's legacy. That would imply 
integrating it with sockets/pipes and filters/codecs 
(compression, transcoding and the like) as well.


I was looking into it with Steven, but currently have little 
spare time (and it seems he does too). I'd gladly guide a 
capable recruit to join the effort in proper rank.


A short write up of where Steven left off on his work would 
probably help kickstart this.


I remember you had some great ideas for handling buffering but it 
changed a few times during that thread and I don't remember what 
the final idea was.


Re: Opportunities for D

2014-07-10 Thread Timon Gehr via Digitalmars-d

On 07/10/2014 07:41 AM, H. S. Teoh via Digitalmars-d wrote:

On Thu, Jul 10, 2014 at 05:12:23AM +0200, Timon Gehr via Digitalmars-d wrote:
[...]

- Lifetime parameters. (it's more future-proof if they are not
introduced by simple identifiers.)

Eg.: void foo[lifetime lt](int x){ ... }


- Attaching a lifetime to a pointer, class reference, ref argument.

Eg.: void foo[lifetime lt](int scope(lt)* x){ ...}
  void foo[lifetime lt](scope(lt) C c){ ... }
  void foo[lifetime lt](scope(lt) ref int x){ ... }
  void foo[lifetime lt1,lifetime lt2](scope(lt1)(C)scope(lt2)[] a){ ... }

(The last example talks about a slice where the array memory has
different lifetimes than the class instances it contains.)

[...]

This is starting to look like some parts of my 'scope' proposal in
another part of this thread.

I'm wondering if it makes sense to simplify lifetimes


(This is not complicated.)


by tying them to lexical context


They are.


rather than using explicit annotations?


Suitable rules can be added to automatically do some sensible thing by 
default, but I don't think it makes sense to try and guess suitable 
lifetimes just by staring at a function signature in the general case.



Being able to
specify explicit lifetimes seem a bit excessive to me, but perhaps you
have a use case in mind that I'm not aware of?
...


If lifetimes cannot transcend function invocations, this is a serious 
limitation, don't you agree? How would you do e.g. an identity function 
on a borrowed pointer type, to name a simple example?
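
(In the hypothetical syntax above, such an identity function would need a lifetime parameter that crosses the call boundary; this is illustration only, not valid D:)

    int scope(lt)* identity[lifetime lt](int scope(lt)* x)
    {
        return x;   // the result borrows exactly as long as the argument does
    }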


Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/9/14, 8:59 PM, logicchains wrote:

On Thursday, 10 July 2014 at 02:12:18 UTC, Atila Neves wrote:

Rob Pike has said multiple times that the key/unique thing about Go is
select and that goroutines are the easy part. I'm not entirely sure
I grok what he means but we should if we're going to try and do what
they got right.


Select is vital for Go in the sense that without it there'd be no way to
do non-blocking send/receives on channels in Go. It's much more concise
than the alternative, which would be something like `if chanA is empty
then foo(chanA) else if chanB is empty then bar(chanB)`. It also avoids
starvation by checking the channels in a random order - unlike the
previous if-else chain, which would never call bar(chanB) if chanA was
always empty.


That's what I think as well.


It's been implemented in Rust[1] via a macro, and can be implemented in
Haskell[2] without compiler support, so I'd be surprised if it wasn't
already possible to implement in D. It wouldn't however be as useful as
Go's until D gets message passing between fibres.


Yah.


Actually, an important question that should be considered: does D want
actor-style concurrency, like Erlang and Akka, or CSP-style concurrency,
like Rust, Go and Haskell? Or both? Deciding this would allow efforts to
be more focused.


We already have actor-style via std.concurrency. We also have fork-join 
parallelism via std.parallel. What we need is a library for CSP.



Andrei
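
(For readers who have not used it, the existing actor-style API is roughly this; a minimal sketch, noting that spawn currently maps to kernel threads:)

    import std.concurrency, std.stdio;

    void worker(Tid parent)
    {
        // Actor style: pull a typed message out of this thread's mailbox.
        receive((int x) { parent.send(x * 2); });
    }

    void main()
    {
        auto tid = spawn(&worker, thisTid);
        tid.send(21);
        writeln(receiveOnly!int());   // prints 42
    }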




Re: Opportunities for D

2014-07-10 Thread logicchains via Digitalmars-d
On Thursday, 10 July 2014 at 05:58:56 UTC, Andrei Alexandrescu 
wrote:
We already have actor-style via std.concurrency. We also have 
fork-join parallelism via std.parallel. What we need is a 
library for CSP.


The actor-style via std.concurrency is only between 'heavyweight' 
threads though, no? Even if lightweight threads may be overhyped, 
part of the appeal of Go and Erlang is that one can spawn tens of 
thousands of threads and it 'just works'. It allows the server 
model of 'one green thread/actor per client', which has a certain 
appeal in its simplicity. Akka similarly uses its own lightweight 
threads, not heavyweight JVM threads.


Think of it from the perspective of attracting Erlang 
programmers, or Java/Scala programmers who use Akka. If they 
tried out std.concurrency and found that it failed horribly when 
trying to spawn fifty thousand actors, they'd be unlikely to 
stick with the language.


Message passing between lightweight threads can also be much 
faster than message passing between heavyweight threads; take a 
look at the following message-passing benchmark and compare 
Haskell, Go and Erlang to the languages using OS threads: 
http://benchmarksgame.alioth.debian.org/u64q/performance.php?test=threadring


Re: Opportunities for D

2014-07-10 Thread Walter Bright via Digitalmars-d

On 7/9/2014 8:12 PM, Timon Gehr wrote:

3. have a design and a plan that gets there

There's no law that says D refs must be exactly like Rust borrowed. We
can come up with a design that works best for D. D != Rust. Do you have
a design in mind?
...


Roughly, but not with 'ref'. It is also an issue of syntax at this point. I
think we should get at least the basics fixed there before talking in-depth
about semantics. (In any case, I still have one DIP pending in an unacceptable
state that I couldn't find the time to write down properly yet.)



Fundamentally, we need syntax for (examples provided for illustration, those are
not proposals):

- Parametric polymorphism

Eg.: void foo[A](int x){ ... }


What does that do?



- Lifetime parameters. (it's more future-proof if they are not introduced by
simple identifiers.)

Eg.: void foo[lifetime lt](int x){ ... }


??



- Attaching a lifetime to a pointer, class reference, ref argument.

Eg.: void foo[lifetime lt](int scope(lt)* x){ ...}
  void foo[lifetime lt](scope(lt) C c){ ... }
  void foo[lifetime lt](scope(lt) ref int x){ ... }
  void foo[lifetime lt1,lifetime lt2](scope(lt1)(C)scope(lt2)[] a){ ... }

(The last example talks about a slice where the array memory has different
lifetimes than the class instances it contains.)


This seems awfully complicated.



- Lifetime intersection:

Eg.: scope(lt1&lt2)Tuple!(int*,int*) pair[lifetime lt1,lifetime lt2](int
scope(lt1)* p1, int scope(lt2)* p2){ ... }

(It can alternatively be done only implicitly at function boundaries.)


- Specifying the lifetime of a struct/class upon construction:

Eg.: struct S[lifetime lt1,lifetime lt2]{
  ...
  this(int scope(lt1)* x, int scope(lt2)* y)scope(lt1&lt2){ ... }
  }





Re: Opportunities for D

2014-07-10 Thread deadalnix via Digitalmars-d

On Wednesday, 9 July 2014 at 19:50:18 UTC, Walter Bright wrote:

8. NotNull!T type

For those that want a non-nullable reference type. This 
should be doable

as a library type.

No.


Rationale?


Please, we've gone through this again and again and again and 
again.


Re: Opportunities for D

2014-07-10 Thread deadalnix via Digitalmars-d

On Wednesday, 9 July 2014 at 20:51:04 UTC, Walter Bright wrote:

On 7/9/2014 1:35 PM, Andrei Alexandrescu wrote:

Hmmm... how about using u after that?


Using u after that would either cause an exception to be 
thrown, or they'd get T.init as a value. I tend to favor the 
latter, but of course those decisions would have to be made as 
part of the design of Unique.


So runtime error or php-style "better anything than nothing" for 
something that can be checked statically...


Re: Opportunities for D

2014-07-10 Thread Paolo Invernizzi via Digitalmars-d

On Thursday, 10 July 2014 at 06:32:32 UTC, logicchains wrote:
On Thursday, 10 July 2014 at 05:58:56 UTC, Andrei Alexandrescu 
wrote:
We already have actor-style via std.concurrency. We also have 
fork-join parallelism via std.parallel. What we need is a 
library for CSP.


The actor-style via std.concurrency is only between 
'heavyweight' threads though, no? Even if lightweight threads 
may be overhyped, part of the appeal of Go and Erlang is that 
one can spawn tens of thousands of threads and it 'just works'. 
It allows the server model of 'one green thread/actor per 
client', which has a certain appeal in its simplicity. Akka 
similarly uses its own lightweight threads, not heavyweight JVM 
threads.


As Sean wrote, please check [1] or if you need it right now, Vibe 
can offer what you need today...

---
Paolo


[1] https://github.com/D-Programming-Language/phobos/pull/1910


Re: Opportunities for D

2014-07-10 Thread Walter Bright via Digitalmars-d

On 7/9/2014 11:59 PM, deadalnix wrote:

On Wednesday, 9 July 2014 at 19:50:18 UTC, Walter Bright wrote:

8. NotNull!T type

For those that want a non-nullable reference type. This should be doable
as a library type.

No.


Rationale?


Please, we've gone through this again and again and again and again.


Please point me to where it was.


Re: Opportunities for D

2014-07-10 Thread Walter Bright via Digitalmars-d

On 7/10/2014 12:03 AM, deadalnix wrote:

So runtime error or php-style "better anything than nothing" for something that
can be checked statically...


I don't understand your comment.


Re: Opportunities for D

2014-07-10 Thread bearophile via Digitalmars-d

Walter Bright:


Exactly. I'm not seeing how this can work that well.


Do you have an example where this works badly? You can require 
the @notnull annotations on the arguments at module/package 
boundaries.


But I think this thread tries to face too many problems in 
parallel. Even just the borrowing/lifetime topic is plenty large 
for a single discussion thread. I suggest to create single-topic 
threads.


Bye,
bearophile


Re: Opportunities for D

2014-07-10 Thread Walter Bright via Digitalmars-d

On 7/10/2014 12:23 AM, Walter Bright wrote:

On 7/9/2014 11:59 PM, deadalnix wrote:

On Wednesday, 9 July 2014 at 19:50:18 UTC, Walter Bright wrote:

8. NotNull!T type

For those that want a non-nullable reference type. This should be doable
as a library type.

No.


Rationale?


Please, we've gone through this again and again and again and again.


Please point me to where it was.


Or better yet, what is your proposal?


Re: Opportunities for D

2014-07-10 Thread Jacob Carlborg via Digitalmars-d

On 09/07/14 15:45, Meta wrote:


As far as I know, there's no reason we can't add pattern matching to
switch or final switch or both. There's no ambiguity because right now
it's not possible to switch on structs or classes. See Kenji's DIP32 for
syntax for tuples that could be leveraged.


There's no reason why we can't add a completely new construct for this 
either. But it's usually easier to get a new function into Phobos than 
changing the language.


--
/Jacob Carlborg


Re: Opportunities for D

2014-07-10 Thread Jacob Carlborg via Digitalmars-d

On 10/07/14 05:59, logicchains wrote:


It's been implemented in Rust[1] via a macro, and can be implemented in
Haskell[2] without compiler support, so I'd be surprised if it wasn't
already possible to implement in D. It wouldn't however be as useful as
Go's until D gets message passing between fibres.


Another use case for AST macros, which we don't have :(

--
/Jacob Carlborg


Re: Opportunities for D

2014-07-10 Thread Dicebot via Digitalmars-d

On Wednesday, 9 July 2014 at 19:47:02 UTC, Walter Bright wrote:

Yes, I mean transitive, and understand what that implies.


I am positively shocked :)

I have started work on porting the CDGC to D2, have a compilable 
version (that was easy thanks to earlier Sean work) but updating the 
implementation to match the new druntime and pass tests will take 
quite some time.


Is CDGC Luca's earlier work on concurrent GC?


Yes.

I'd state it differently: Marketing fuss about goroutines is 
the killer feature
of Go :) It does not have any fundamental advantage over 
existing actor model

and I doubt it will matter _that_ much.


Much of the froth about Go is dismissed by serious developers, 
but they nailed the goroutine thing. It's Go's killer feature.


Who are they? I don't know any serious developer who praises 
goroutines if he was not a CSP fan before. I foresee that it will 
make no impact for D because we simply don't have the resources to 
advertise it as a killer feature (on the same scale Go did).


Well, of course, if someone wants to waste his time on this - no 
objections from my side :)


I don't know where it comes from but non-nullable reference 
type has ZERO value

if it is not the default one.


Making it the default is impossible for D. However,

  class _C { ... }
  alias NotNull!_C C;

is entirely practical. It's not unlike the common C practice:

  typedef struct S { ... } S;

to bring S out of the tag name space.


You are totally missing the point if you consider this even a 
comparable replacement. The reason non-nullable types are awesome is 
that you are 100% sure the compiler will force you to handle null 
cases, and if the program compiles it is guaranteed to be safe in that 
regard. What you propose makes hardly any difference.
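
For what it's worth, a library version is easy enough to sketch (hypothetical code, not a Phobos proposal); its limitation is exactly the one above, since it can only assert at construction time rather than make the compiler force callers to handle null:

    struct NotNull(T) if (is(T == class))
    {
        private T payload;

        @disable this();              // cannot exist without a value

        this(T p)
        {
            assert(p !is null, "NotNull constructed from null");
            payload = p;
        }

        alias payload this;           // usable wherever a T is expected
    }

    class C { void f() {} }

    void main()
    {
        auto c = NotNull!C(new C);
        c.f();                        // use sites can skip the null check
        // NotNull!C d;               // error: default construction disabled
    }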


Re: Opportunities for D

2014-07-10 Thread Dicebot via Digitalmars-d

On Thursday, 10 July 2014 at 03:59:15 UTC, logicchains wrote:
Actually, an important question that should be considered: does 
D want actor-style concurrency, like Erlang and Akka, or 
CSP-style concurrency, like Rust, Go and Haskell? Or both? 
Deciding this would allow efforts to be more focused.


AFAICS D already has actor-style concurrency with vibe.d 
extensions for std.concurrency so this is an easy choice ;)


Re: Opportunities for D

2014-07-10 Thread logicchains via Digitalmars-d

On Thursday, 10 July 2014 at 10:43:39 UTC, Dicebot wrote:

On Thursday, 10 July 2014 at 03:59:15 UTC, logicchains wrote:
Actually, an important question that should be considered: 
does D want actor-style concurrency, like Erlang and Akka, or 
CSP-style concurrency, like Rust, Go and Haskell? Or both? 
Deciding this would allow efforts to be more focused.


AFAICS D already has actor-style concurrency with vibe.d 
extensions for std.concurrency so this is an easy choice ;)


Are there any tutorials or blog posts out there demonstrating how 
to use this? I think posts along the lines of "This is a 
CSP/message passing program in Go/Erlang. This is the same 
program translated into D; look how concise and fast it is!" 
could attract a lot of interest.


Reading the code in the pull request [1], for instance, makes me 
wonder how to tell if `spawn()` is spawning a thread or a fibre. 
Can a tid refer to a fibre? If so, why's it called a thread ID, 
and how do I tell if a particular tid refers to a thread or 
fibre? It would be great to have these kinds of questions 
answered in an easily available reference (for instance, the 
documentation for std.concurrency, which currently doesn't even 
mention fibres or vibe.d).


1. https://github.com/D-Programming-Language/phobos/pull/1910


Re: Opportunities for D

2014-07-10 Thread Dicebot via Digitalmars-d

On Thursday, 10 July 2014 at 11:03:20 UTC, logicchains wrote:
Are there any tutorials or blog posts out there demonstrating 
how to use this? I think posts along the lines of This is a 
CSP/message passing program in Go/Erlang. This is the same 
program translated into D; look how concise and faster it is! 
could attract a lot of interest.


There are no detailed blog posts and I believe only few 
developers are really well proficient with this vibe.d 
functionality. Some relevant docs:


http://vibed.org/api/vibe.core.core/
http://vibed.org/api/vibe.core.concurrency/
http://vibed.org/api/vibe.core.task/

Reading the code in the pull request [1], for instance, makes 
me wonder how to tell if `spawn()` is spawning a thread or a 
fibre. Can a tid refer to a fibre? If so, why's it called a 
thread ID, and how do I tell if a particular tid refers to a 
thread or fibre? It would be great to have these kinds of 
questions answered in an easily available reference (for 
instance, the documentation for std.concurrency, which 
currently doesn't even mention fibres or vibe.d).


1. https://github.com/D-Programming-Language/phobos/pull/1910



Problem is that this is most simple PR to simply add 
message-passing support for fibers. Adding some advanced 
schedulers with worker thread pool can be expected to be done on 
top but.. This small PR has been rotting there for ages with 
pretty much zero attention but from few interested persons.


I can't blame Sonke or anyone else for not wanting to waste his 
time on pushing more stuff upstream considering how miserable 
contribution experience is right now. We can't really expect 
anything else to improve while it stays that bad - Andrei has 
mentioned that during his keynote but nothing has ever been done 
to improve the situation.


Re: Opportunities for D

2014-07-10 Thread Jacob Carlborg via Digitalmars-d

On 10/07/14 01:57, H. S. Teoh via Digitalmars-d wrote:


[...]
I'm sure there are plenty of holes in this proposal, so destroy away.
;-)


You should post this in a new thread.

I'm wondering if a lot more data can be statically allocated. Then 
passed by reference to functions taking scope parameters. This should be 
safe since the parameter is guaranteed to outlive the function call.


--
/Jacob Carlborg


Re: Opportunities for D

2014-07-10 Thread bearophile via Digitalmars-d

Dicebot:

I can't blame Sonke or anyone else for not wanting to waste his 
time on pushing more stuff upstream considering how miserable 
contribution experience is right now.


This was one of the causes of the creation of Tango and its 
fiasco, so better to not repeat that.


Bye,
bearophile


Re: Opportunities for D

2014-07-10 Thread Puming via Digitalmars-d

On Thursday, 10 July 2014 at 11:19:26 UTC, Dicebot wrote:

On Thursday, 10 July 2014 at 11:03:20 UTC, logicchains wrote:
Are there any tutorials or blog posts out there demonstrating 
how to use this? I think posts along the lines of This is a 
CSP/message passing program in Go/Erlang. This is the same 
program translated into D; look how concise and faster it is! 
could attract a lot of interest.


There are no detailed blog posts and I believe only few 
developers are really well proficient with this vibe.d 
functionality. Some relevant docs:


http://vibed.org/api/vibe.core.core/
http://vibed.org/api/vibe.core.concurrency/
http://vibed.org/api/vibe.core.task/

Reading the code in the pull request [1], for instance, makes 
me wonder how to tell if `spawn()` is spawning a thread or a 
fibre. Can a tid refer to a fibre? If so, why's it called a 
thread ID, and how do I tell if a particular tid refers to a 
thread or fibre? It would be great to have these kinds of 
questions answered in an easily available reference (for 
instance, the documentation for std.concurrency, which 
currently doesn't even mention fibres or vibe.d).


1. https://github.com/D-Programming-Language/phobos/pull/1910



Problem is that this is most simple PR to simply add 
message-passing support for fibers. Adding some advanced 
schedulers with worker thread pool can be expected to be done 
on top but.. This small PR has been rotting there for ages with 
pretty much zero attention but from few interested persons.


I can't blame Sonke or anyone else for not wanting to waste his 
time on pushing more stuff upstream considering how miserable 
contribution experience is right now. We can't really expect 
anything else to improve while it stays that bad - Andrei has 
mentioned that during his keynote but nothing has ever been 
done to improve the situation.


Scala also has a history of three implementations of its actor 
system, until Akka was merged into official support (with Akka's 
author Jonas Bonér becoming CTO of Typesafe).


In vibe.d's case I think the first problem is that people don't 
really know there is a good fiber-based actor system already in 
vibe.d. For that reason I think it would benefit from being separated 
out into a standalone library, which could then add more 
functionality like the location transparency of actors in Akka. 
Otherwise people would only recognize vibe.d as a networking lib, 
with no intent to look for actors there.




Re: Opportunities for D

2014-07-10 Thread Dicebot via Digitalmars-d

On Thursday, 10 July 2014 at 12:13:03 UTC, bearophile wrote:

Dicebot:

I can't blame Sonke or anyone else for not wanting to waste 
his time on pushing more stuff upstream considering how 
miserable contribution experience is right now.


This was one of the causes of the creation of Tango and its 
fiasco, so better to not repeat that.


Bye,
bearophile


No one but Walter / Andrei can do anything about it. Right now we 
are in weird situation when they call for lieutenants but are 
not ready to abandon decision power. It can't possibly work that 
way. No amount of volunteer effort will help when so many PR 
stall waiting for resolution comment from one of language 
generals.


Re: Opportunities for D

2014-07-10 Thread bearophile via Digitalmars-d

Dicebot:

No one but Walter / Andrei can do anything about it. Right now 
we are in weird situation when they call for lieutenants but 
are not ready to abandon decision power. It can't possibly work 
that way. No amount of volunteer effort will help when so many 
PR stall waiting for resolution comment from one of language 
generals.


It seems an important topic. Pull reverts (like: 
https://github.com/D-Programming-Language/phobos/commit/e5f7f41d253aacc601be64b5a1e4f24cd5ecfc32 
) aren't process failures, they should be normal parts of the 
dmd/Phobos development process. Even if 5-8% of the merges gets 
reverted, it's still OK. And now there is the cherry picking, so 
it's hard to pollute betas with bad patches.


Bye,
bearophile


Re: Opportunities for D

2014-07-10 Thread Wyatt via Digitalmars-d
On Wednesday, 9 July 2014 at 23:58:39 UTC, H. S. Teoh via 
Digitalmars-d wrote:


So here's a first stab at refining (and extending) what 'scope' 
should be:


In general, I like it, but can scopedness be inferred?  The 
impression I get from this is we're supposed to manually annotate 
every scoped everything, which IMO kind of moots the benefits in 
a broad sense.


If it _cannot_ be inferred (even if imperfectly), then I wonder 
if it doesn't make more sense to invert the proposed default and 
require annotation when scope restrictions need to be eased.  The 
ideal seems like it could be a major blow against non-local 
errors, but relying on convention isn't desirable.


Of course, in fairness, I may be misunderstanding the application 
of this entirely...?


-Wyatt


Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/9/14, 11:59 PM, deadalnix wrote:

On Wednesday, 9 July 2014 at 19:50:18 UTC, Walter Bright wrote:

8. NotNull!T type

For those that want a non-nullable reference type. This should be
doable
as a library type.

No.


Rationale?


Please, we've gone through this again and again and again and again.


Yes, the arguments come and go by in forum discussions. To avoid this we 
need a well-written DIP that has a section illustrating the 
insufficiencies of library solutions, and then proposes the few needed 
additions to the language that make the thing work properly. Other 
language communities have done this with good results.


Andrei



Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/10/14, 12:54 AM, Walter Bright wrote:

On 7/10/2014 12:23 AM, Walter Bright wrote:

On 7/9/2014 11:59 PM, deadalnix wrote:

On Wednesday, 9 July 2014 at 19:50:18 UTC, Walter Bright wrote:

8. NotNull!T type

For those that want a non-nullable reference type. This should be
doable
as a library type.

No.


Rationale?


Please, we've gone through this again and again and again and again.


Please point me to where it was.


Or better yet, what is your proposal?


DIP please. -- Andrei


Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/10/14, 12:21 AM, Walter Bright wrote:

On 7/10/2014 12:03 AM, deadalnix wrote:

So runtime error or php-style "better anything than nothing" for
something that
can be checked statically...


I don't understand your comment.


It's very simple. The semantics you propose is move with the syntax of 
copy. Following the implicit move, the source of it is surprisingly 
modified (emptied).


That doesn't work. There is a humongous body of knowledge accumulated in 
C++ with std::auto_ptr. That artifact has been quite the show, including 
people who swore by it (!). We'd do good to simply draw from that 
experience instead of reenacting it.



Andrei



Re: Opportunities for D

2014-07-10 Thread John Colvin via Digitalmars-d

On Thursday, 10 July 2014 at 13:09:42 UTC, bearophile wrote:

Dicebot:

No one but Walter / Andrei can do anything about it. Right now 
we are in weird situation when they call for lieutenants but 
are not ready to abandon decision power. It can't possibly 
work that way. No amount of volunteer effort will help when so 
many PR stall waiting for resolution comment from one of 
language generals.


It seems an important topic. Pull reverts (like: 
https://github.com/D-Programming-Language/phobos/commit/e5f7f41d253aacc601be64b5a1e4f24cd5ecfc32 
) aren't process failures, they should be normal parts of the 
dmd/Phobos development process. Even if 5-8% of the merges gets 
reverted, it's still OK. And now there is the cherry picking, 
so it's hard to pollute betas with bad patches.


Bye,
bearophile


Yes. An advantage of a structured formal release process is that 
it frees up development to make mistakes in the short term.


Re: Opportunities for D

2014-07-10 Thread John Colvin via Digitalmars-d

On Thursday, 10 July 2014 at 12:54:19 UTC, Dicebot wrote:

On Thursday, 10 July 2014 at 12:13:03 UTC, bearophile wrote:

Dicebot:

I can't blame Sonke or anyone else for not wanting to waste 
his time on pushing more stuff upstream considering how 
miserable contribution experience is right now.


This was one of the causes of the creation of Tango and its 
fiasco, so better to not repeat that.


Bye,
bearophile


No one but Walter / Andrei can do anything about it. Right now 
we are in weird situation when they call for lieutenants but 
are not ready to abandon decision power. It can't possibly work 
that way. No amount of volunteer effort will help when so many 
PR stall waiting for resolution comment from one of language 
generals.


To be fair to Walter/Andrei, you need to be clear who your 
lieutenant is before you can delegate to them.


Who has stepped up to take charge of concurrency in D?


Re: Opportunities for D

2014-07-10 Thread Dicebot via Digitalmars-d

On Thursday, 10 July 2014 at 14:09:41 UTC, John Colvin wrote:
To be fair to Walter/Andrei, you need to be clear who your 
lieutenant is before you can delegate to them.


Who has stepped up to take charge of concurrency in D?


I think it should be other way around - announcing slot with 
listed responsibilities / decision power and asking for 
volunteers, same as it was done with release process tzar 
(kudos Andrew).


Just stepping up is a no-op action without explicit delegation. 
Also I believe every such domain needs two persons in charge and 
not just one - for example, Sean Kelly is most suitable candidate 
for such role but who accept his PR then? :)


Re: Opportunities for D

2014-07-10 Thread John Colvin via Digitalmars-d

On Thursday, 10 July 2014 at 14:14:20 UTC, Dicebot wrote:

On Thursday, 10 July 2014 at 14:09:41 UTC, John Colvin wrote:
To be fair to Walter/Andrei, you need to be clear who your 
lieutenant is before you can delegate to them.


Who has stepped up to take charge of concurrency in D?


I think it should be other way around - announcing slot with 
listed responsibilities / decision power and asking for 
volunteers, same as it was done with release process tzar 
(kudos Andrew).


Just stepping up is a no-op action without explicit 
delegation. Also I believe every such domain needs two persons 
in charge and not just one - for example, Sean Kelly is most 
suitable candidate for such role but who accept his PR then? :)


@ Walter & Andrei
Would a list of subject areas that require delegation be a good 
idea to put on the wiki? A list of positions, both available and 
filled?


Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/10/14, 5:54 AM, Dicebot wrote:

On Thursday, 10 July 2014 at 12:13:03 UTC, bearophile wrote:

Dicebot:


I can't blame Sonke or anyone else for not wanting to waste his time
on pushing more stuff upstream considering how miserable contribution
experience is right now.


This was one of the causes of the creation of Tango and its fiasco, so
better to not repeat that.

Bye,
bearophile


No one but Walter / Andrei can do anything about it. Right now we are in
weird situation when they call for lieutenants but are not ready to
abandon decision power.


In the military (where the metaphor has been drawn from) there are 
lieutenants and there's no abandonment of decision power. Of course I 
wouldn't push the simile too much.



It can't possibly work that way. No amount of
volunteer effort will help when so many PR stall waiting for resolution
comment from one of language generals.


I'll make a pass, but on the face of it I disagree.

There's just lots and lots and lots of obviously good things that just 
don't get done until Walter or I do them. Last example I remember is 
video links for the DConf 2014 talks on dconf.org. The SMALLEST and 
OBVIOUSLY GOOD THING anyone could imagine. Someone on reddit mentioned 
we should put them there. Nobody in the community did anything about it 
until I posted the pull request for day 1 (http://goo.gl/9EUXv1) and 
Walter pulled it (http://goo.gl/O22dsa).


In the meantime, everybody's busy arguing the minutia of logo redesign. 
The length (not existence) of that thread is a piece of evidence of 
what's wrong with our community.


Looking at https://github.com/D-Programming-Language/phobos/pulls, I 
agree there are a few controversial pull requests that are explicitly 
waiting for me, such as 
https://github.com/D-Programming-Language/phobos/pull/1010. I'd need a 
fair amount of convincing that that's a frequent case. Looking at the 
second oldest pull request 
(https://github.com/D-Programming-Language/phobos/pull/1138) that's just 
a documentation pull, on which I myself last asked about status on March 15.


Furthermore there are just a good amount of pull requests that have 
nothing to do with any leadership. E.g. 
https://github.com/D-Programming-Language/phobos/pull/1527 is some 
apparently work that's just sitting there abandoned.


Switching to newer pull requests, there are simple and obviously good 
pull requests that just sit there for anyone to pull. And that includes 
you, Dicebot, since a few seconds ago. Since you don't mince words when 
criticizing the leadership you may as well put your money where your 
mouth is. https://github.com/D-Programming-Language/phobos/pull/2300 for 
example is simple, obviously good, and could be pulled in a minute by 
any of our 24, pardon, 25 core pullers who has a basic understanding of 
@trusted.


Then there's stuff I have no expertise in such as 
https://github.com/D-Programming-Language/phobos/pull/2307. Not only am I 
not on the hook for that, I'd better not discuss and pull that, and leave it to 
someone who knows curl better.


Of course that doesn't undo the fact that Walter and I are on hook for a 
number of things. What I'm saying is I disagree with the allegation that 
no amount of volunteer effort will help. From the looks of things, 
we're in dire need of volunteer effort.



Andrei



Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/10/14, 7:24 AM, John Colvin wrote:

On Thursday, 10 July 2014 at 14:14:20 UTC, Dicebot wrote:

On Thursday, 10 July 2014 at 14:09:41 UTC, John Colvin wrote:

To be fair to Walter/Andrei, you need to be clear who your lieutenant
is before you can delegate to them.

Who has stepped up to take charge of concurrency in D?


I think it should be other way around - announcing slot with listed
responsibilities / decision power and asking for volunteers, same as
it was done with release process tzar (kudos Andrew).

Just stepping up is a no-op action without explicit delegation. Also
I believe every such domain needs two persons in charge and not just
one - for example, Sean Kelly is most suitable candidate for such role
but who accept his PR then? :)


@ Walter & Andrei
Would a list of subject areas that require delegation be a good idea to
put on the wiki? A list of positions, both available and filled?


I think that's a good idea, I'll think of it.

In the meantime there seems to be a want of even foot soldiers and 
corporals, which seems to be a good way to promote lieutenants. In the 
post I just sent I pointed out a number of good and absolutely trivial 
pull requests for https://github.com/D-Programming-Language/phobos that 
simply sit there for days and weeks.



Andrei



Re: Opportunities for D

2014-07-10 Thread Dicebot via Digitalmars-d
On Thursday, 10 July 2014 at 14:30:38 UTC, Andrei Alexandrescu 
wrote:
Then there's stuff I have no expertise in such as 
https://github.com/D-Programming-Language/phobos/pull/2307. Not 
only am I not on the hook for that, I'd better not discuss and pull 
that, and leave it to someone who knows curl better.


I agree with most of what you have said (and I can definitely be 
blamed as guilty) but this situation was exactly what I had in mind 
for the original rant. You don't feel that you are competent enough 
to judge that PR, and everyone else (including those few who 
possibly are proficient) does not feel authoritative enough to 
make the decision. It is not your personal failure (and I 
apologize if my words sound like that) but a general organizational 
problem.


I believe calling for explicit domains of responsibility ( as 
opposed to just giving push access :'( ) is one way to address 
that. Quite likely there are better approaches but those are for 
someone else to propose.


Re: Opportunities for D

2014-07-10 Thread Sean Kelly via Digitalmars-d
On Wednesday, 9 July 2014 at 21:47:47 UTC, Andrei Alexandrescu 
wrote:

On 7/9/14, 1:51 PM, Walter Bright wrote:

On 7/9/2014 1:35 PM, Andrei Alexandrescu wrote:

Hmmm... how about using u after that?


Using u after that would either cause an exception to be 
thrown, or
they'd get T.init as a value. I tend to favor the latter, but 
of course
those decisions would have to be made as part of the design of 
Unique.


That semantics would reenact the auto_ptr disaster so probably 
wouldn't be a good choice. -- Andrei


The problem with auto_ptr is that people rarely used it for what 
it was designed for.  Probably because it was the only smart 
pointer in the STL.  As I'm sure you're aware, the purpose of 
auto_ptr is to explicitly define ownership transfer of heap data. 
 For that it's pretty much perfect, and I use it extensively.  It 
looks like unique_ptr is pretty much the same, but with a 
facelift.  Underneath it still performs destructive copies, 
unless I've misread the docs.


Re: Opportunities for D

2014-07-10 Thread Dicebot via Digitalmars-d

E.g.

https://github.com/D-Programming-Language/phobos/pull/1527 is some
apparently work that's just sitting there abandoned.

Hm, slightly OT: is it considered widely acceptable to take over 
such pull requests by reopening rebased one with identical 
content? I presume Boost licensing implies so but not sure 
everyone else expects the same.


Re: Opportunities for D

2014-07-10 Thread Sean Kelly via Digitalmars-d

On Thursday, 10 July 2014 at 06:32:32 UTC, logicchains wrote:
On Thursday, 10 July 2014 at 05:58:56 UTC, Andrei Alexandrescu 
wrote:
We already have actor-style via std.concurrency. We also have 
fork-join parallelism via std.parallel. What we need is a 
library for CSP.


The actor-style via std.concurrency is only between 
'heavyweight' threads though, no? Even if lightweight threads 
may be overhyped, part of the appeal of Go and Erlang is that 
one can spawn tens of thousands of threads and it 'just works'. 
It allows the server model of 'one green thread/actor per 
client', which has a certain appeal in its simplicity. Akka 
similarly uses its own lightweight threads, not heavyweight JVM 
threads.


No.  I've had an outstanding pull request to fix this for quite a 
while now.  I think there's a decent chance it will be in the 
next release.  To be fair, that pull request mostly provides the 
infrastructure for changing how concurrency is handled.  A 
fiber-based scheduler backed by a thread pool doesn't exist yet, 
though it shouldn't be hard to write (the big missing piece is 
having a dynamic thread pool).  I was going to try and knock one 
out while on the airplane in a few days.



Message passing between lightweight threads can also be much 
faster than message passing between heavyweight threads; take a 
look at the following message-passing benchmark and compare 
Haskell, Go and Erlang to the languages using OS threads: 
http://benchmarksgame.alioth.debian.org/u64q/performance.php?test=threadring


Thanks for the benchmark.  I didn't have a good reference for 
what kind of performance capabilities to hit, so there are a few 
possible optimizations I've left out of std.concurrency because 
they didn't buy much in my own testing (like a free list of 
message objects).  I may have to revisit those ideas with this 
benchmark in mind and see what happens.


Re: Opportunities for D

2014-07-10 Thread Sean Kelly via Digitalmars-d

On Thursday, 10 July 2014 at 11:03:20 UTC, logicchains wrote:


Reading the code in the pull request [1], for instance, makes 
me wonder how to tell if `spawn()` is spawning a thread or a 
fibre. Can a tid refer to a fibre? If so, why's it called a 
thread ID, and how do I tell if a particular tid refers to a 
thread or fibre? It would be great to have these kinds of 
questions answered in an easily available reference (for 
instance, the documentation for std.concurrency, which 
currently doesn't even mention fibres or vibe.d).


That was a deliberate design decision--you're not supposed to 
know, or care, what it's spawning.  This also allows us to change 
the scheduling algorithm without affecting user code.  That said, 
because statics are thread-local by default, and because 
implementing fiber-local storage in a C-compatible language would 
be difficult, the scheduler is user-configurable.  So there is 
some visibility into this, just not as a part of the normal 
spawn/send/receive flow.


Re: Opportunities for D

2014-07-10 Thread Sean Kelly via Digitalmars-d

On Thursday, 10 July 2014 at 11:19:26 UTC, Dicebot wrote:


Problem is that this is most simple PR to simply add 
message-passing support for fibers. Adding some advanced 
schedulers with worker thread pool can be expected to be done 
on top but.. This small PR has been rotting there for ages with 
pretty much zero attention but from few interested persons.


Yep.  It's been gathering dust but for the occasional request to 
rebase the code.  A better scheduler can be added, but I think 
that should follow this pull request's addition to Phobos.  I 
don't want to block the infrastructure from acceptance because 
people have issues with a complicated scheduler that happened to 
be bundled with it.  Though as it is, one of the pull requests I 
created for Druntime for a Facebook request has sat for months, 
presumably because of a request regarding a documentation 
formatting change that I overlooked.  And I know I'm not alone.  
Robert's struggle with getting std.logger accepted is the stuff 
told to children around the campfire so they don't venture out 
into the dark.


Re: Opportunities for D

2014-07-10 Thread John Colvin via Digitalmars-d

On Thursday, 10 July 2014 at 14:54:51 UTC, Dicebot wrote:

E.g.
https://github.com/D-Programming-Language/phobos/pull/1527 is 
some

apparently work that's just sitting there abandoned.

Hm, slightly OT: is it considered widely acceptable to take 
over such pull requests by reopening rebased one with identical 
content? I presume Boost licensing implies so but not sure 
everyone else expects the same.


I don't see why this would invalidate the licence:

fork the branch that contains the request, rebase to fix any 
conflicts, make any extra commits needed, open a new pull 
request. It's really no different from merging the pull and then 
fixing it afterwards.


Re: Opportunities for D

2014-07-10 Thread Sean Kelly via Digitalmars-d
On Thursday, 10 July 2014 at 14:30:38 UTC, Andrei Alexandrescu 
wrote:


Switching to newer pull requests, there are simple and 
obviously good pull requests that just sit there for anyone to 
pull.


This.  I think pull requests tend to sit because people don't 
feel they have the authority to push the button, and all the 
author can do is ask.  I know I should be a better shepherd of 
Druntime as well, and should have some actual free time in about 
another month where I hope to start catching up.


Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/10/14, 8:29 AM, Sean Kelly wrote:

Robert's struggle with getting std.logger accepted is the stuff told to
children around the campfire so they don't venture out into the dark.


Actually we use his logger at Facebook quite extensively (and happily). 
-- Andrei


Re: Opportunities for D

2014-07-10 Thread Sean Kelly via Digitalmars-d

On Thursday, 10 July 2014 at 15:35:03 UTC, John Colvin wrote:

On Thursday, 10 July 2014 at 14:54:51 UTC, Dicebot wrote:

E.g.
https://github.com/D-Programming-Language/phobos/pull/1527 is 
some

apparently work that's just sitting there abandoned.

Hm, slightly OT: is it considered widely acceptable to take 
over such pull requests by reopening rebased one with 
identical content? I presume Boost licensing implies so but 
not sure everyone else expects the same.


I don't see why this would invalidate the licence:

fork the branch that contains the request, rebase to fix any 
conflicts, make any extra commits needed, open a new pull 
request. It's really no different from merging the pull and 
then fixing it afterwards.


So long as the author's name remains in place in the license 
blurb I think you're pretty much free to do whatever you want 
with the code.  That's the beauty of the Boost license.  It's as 
close to Public Domain as it seems possible to get given the 
vagaries of international law.


I would *love* to have a good networking package in Phobos.  It's 
been my #1 item for basically the entire 10 years I've been using 
D.  It just happens to conflict a bit too much with my 
professional work for me to comfortably make any contribution 
without internal approval, and I don't see that happening unless 
we start using D and want to contribute back to the language.


vibe.d exists now though, and maybe someone could tease it apart 
to get some portion of it in Phobos and leave the rest as a 
third-party extension?  That's being actively developed, and is 
really quite nice, though perhaps not so general-purpose as 
std.net was intended to be.


It seems that most of my active use of D these days is writing 
scripts to talk to our servers for various tasks.  For the most 
part I just use libcurl for that and it's pretty okay, but for 
the things that don't talk HTTP... ugh.


Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/10/14, 7:53 AM, Sean Kelly wrote:

On Wednesday, 9 July 2014 at 21:47:47 UTC, Andrei Alexandrescu wrote:

On 7/9/14, 1:51 PM, Walter Bright wrote:

On 7/9/2014 1:35 PM, Andrei Alexandrescu wrote:

Hmmm... how about using u after that?


Using u after that would either cause an exception to be thrown, or
they'd get T.init as a value. I tend to favor the latter, but of course
those decisions would have to be made as part of the design of Unique.


That semantics would reenact the auto_ptr disaster so probably
wouldn't be a good choice. -- Andrei


The problem with auto_ptr is that people rarely used it for what it was
designed for.  Probably because it was the only smart pointer in the
STL.  As I'm sure you're aware, the purpose of auto_ptr is to explicitly
define ownership transfer of heap data.  For that it's pretty much
perfect, and I use it extensively.  It looks like unique_ptr is pretty
much the same, but with a facelift.  Underneath it still performs
destructive copies, unless I've misread the docs.


Nononono - unique_ptr never moves from lvalues. Also, the educational 
argument for auto_ptr doesn't stand; it was bad design, pure and simple. 
-- Andrei




Re: Opportunities for D

2014-07-10 Thread Andrei Alexandrescu via Digitalmars-d

On 7/10/14, 7:54 AM, Dicebot wrote:

E.g.

https://github.com/D-Programming-Language/phobos/pull/1527 is some
apparently work that's just sitting there abandoned.

Hm, slightly OT: is it considered widely acceptable to take over such
pull requests by reopening rebased one with identical content? I presume
Boost licensing implies so but not sure everyone else expects the same.


That's totally on topic! I think it's fair game to take over pull 
requests of which authors did not respond to repeated pings. -- Andrei

