Re: Program logic bugs vs input/environmental errors

2014-09-28 Thread luka8088 via Digitalmars-d

On 28.9.2014. 21:32, Walter Bright wrote:

On 9/28/2014 11:25 AM, bearophile wrote:

Exceptions are often used to help debugging...



https://www.youtube.com/watch?v=hBhlQgvHmQ0


Example exception messages:

Unable to connect to database
Invalid argument count
Invalid network package format

All these messages require no stack trace, as they do not call for code
fixes; they indicate an issue outside the program itself. If a stack
trace is required, then assert should have been used instead.
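A minimal sketch of that distinction in D (my own illustration; the
function and messages are hypothetical):

  import std.exception : enforce;

  void checkArgs (string[] args) {
    // input/environmental error: report to the user with an exception,
    // no stack trace needed
    enforce(args.length == 2, "Invalid argument count");

    // program logic bug: guard an internal invariant with assert,
    // where a stack trace is actually useful
    assert(args.length >= 1, "args must contain the program name");
  }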


Or to put it better: can anyone give an example of an exception that
would require a stack trace?




Re: Program logic bugs vs input/environmental errors

2014-09-28 Thread luka8088 via Digitalmars-d

On 28.9.2014. 1:15, Walter Bright wrote:

This issue comes up over and over, in various guises. I feel like
Yosemite Sam here:

 https://www.youtube.com/watch?v=hBhlQgvHmQ0

In that vein, Exceptions are for either being able to recover from
input/environmental errors, or report them to the user of the application.

When I say "They are NOT for debugging programs", I mean they are NOT
for debugging programs.

assert()s and contracts are for debugging programs.

After all, what would you think of a compiler that spewed out messages
like this:

> dmd test.d
test.d(15) Error: missing } thrown from dmd/src/parse.c(283)

?

See:

 https://issues.dlang.org/show_bug.cgi?id=13543

As for the programmer wanting to know where the message "missing }" came
from,

 grep -r "missing }" dmd/src/*.c

works nicely. I do that sort of thing all the time. It really isn't a
problem.


We had this issue at work (we are working with PHP). We output a stack
trace for both exceptions and asserts, but the lines that should be
addressed are not always so obvious.


I found a solution and it works great for us. All library code is marked 
appropriately, so when the stack is output, all the lines in library code 
are shadowed out (with a gray color) and the first non-library line from 
the top of the stack is pointed out. 95% of the time that is the line the 
programmer should look into. The other 5% of the time it shows the line 
where the programmer is forwarding a call to the library, which turns out 
to be fine, as it is still much more comprehensible than the entire 
stack. One note worth mentioning is that juniors have a much easier time 
understanding which lines concern them, and from that I can only conclude 
that such an approach is more intuitive.


Marking is done at the namespace level so it can be easily disabled for 
an entire namespace.
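The idea, as a minimal D sketch (our implementation is in PHP; the
prefix list and ANSI coloring here are just illustrative assumptions):

  import std.algorithm : any, startsWith;
  import std.stdio : writeln;

  // frames whose symbols start with these prefixes count as library code
  immutable string[] libraryPrefixes = ["std.", "core.", "mylib."];

  void printTrace (string[] frames) {
    bool pointedOut = false;
    foreach (frame; frames) {
      if (libraryPrefixes.any!(p => frame.startsWith(p)))
        writeln("\x1b[90m", frame, "\x1b[0m");      // shadow out in gray
      else {
        writeln(pointedOut ? "   " : "=> ", frame); // point out first hit
        pointedOut = true;
      }
    }
  }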


I think outputting a stack trace for asserts is a must because of that 
5%. And for exceptions I agree completely with your arguments; I think 
there is no need for a stack trace.


From my experience this has been a good approach, and I think it is 
worth considering.




Re: RFC: reference counted Throwable

2014-09-22 Thread luka8088 via Digitalmars-d

On 21.9.2014. 22:57, Peter Alexander wrote:

On Sunday, 21 September 2014 at 19:36:01 UTC, Nordlöw wrote:

On Friday, 19 September 2014 at 15:32:38 UTC, Andrei Alexandrescu wrote:

Please chime in with thoughts.


Why don't we all focus our efforts on upgrading the current GC to a
state-of-the GC making use of D's strongly typed memory model before
discussing these things?


GC improvements are critical, but...

"As discussed, having exception objects being GC-allocated is clearly a
large liability that we need to address. They prevent otherwise careful
functions from being @nogc so they affect even apps that otherwise would
be okay with a little litter here and there."

No improvements to the GC can fix this. @nogc needs to be usable,
whether you are a GC fan or not.


I think that what is being suggested is that upgrading the GC would 
broaden the view of what can and should be done.


For example, now that ranges and mixins exist, great ideas come to mind, 
and without them we could only guess. I think that the GC is in the same 
position.




Re: Memory allocation purity

2014-05-15 Thread luka8088 via Digitalmars-d
On 15.5.2014. 17:24, Andrei Alexandrescu wrote:
> On 5/15/14, 3:31 AM, luka8088 wrote:
>> Yeah, I read all about weak/strong purity and I do understand the
>> background. I was talking about strong purity, maybe I should have
>> pointed that out.
>>
>> So, to correct myself: as I understood, strong purity implies
>> memoization. Am I correct?
> 
> Yes, as long as you don't rely on distinguishing objects by address.
> 
> Purity of allocation is frequently assumed by functional languages
> because without it it would be difficult to get much work done. Then,
> most functional languages make it difficult or impossible to distinguish
> values by their address. In D that's easy. A D programmer needs to be
> aware of that, and I think that's fine.
> 
> 
> Andrei
> 
> 

Hm, this does not seem right. @safe prevents you from taking the address
of a value, as stated in http://dlang.org/function.html#safe-functions ;
shouldn't pure do the same?
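A small illustration of what pure alone currently permits (my addition):

  // compiles as pure; the same address-taking would be rejected in a
  // @safe function
  bool sameSlot (int a, int b) pure nothrow {
    return &a is &b;
  }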

Reading again through the @safe docs it seems to me that purity (both
strong and weak) should imply @safe.

I have seen many claims that in D pure means something different from
what it means in functional languages, and I think it is too bad if there
is not going to be functional-language-like purity in D. I have not seen
any example of something that couldn't be forbidden by the compiler if
such support were considered.



Re: Memory allocation purity

2014-05-15 Thread luka8088 via Digitalmars-d
On 15.5.2014. 13:04, Jonathan M Davis via Digitalmars-d wrote:
> On Thu, 15 May 2014 10:48:07 +
> Don via Digitalmars-d  wrote:
> 
>> Yes. 'strong pure' means pure in the way that the functional
>> language crowd means 'pure'.
>> 'weak pure' just means doesn't use globals.
>>
>> But note that "strong purity" isn't an official concept, it was
>> just the terminology I used when explaining to Walter what I meant.
>> I don't like the term because it's rather misleading
>> -- in reality you could define a whole range of purity strengths
>> (more than just two).
>> The stronger the purity, the more optimizations you can apply.
> 
> Yeah, I agree. The problem is that it always seems necessary to use the terms
> weak pure to describe the distinction - or maybe I just suck at coming up with
> a better way to describe it than you did initially. Your recent post in this
> thread talking about @noglobal seems to be a pretty good alternate way to
> explain it though. Certainly, the term pure throws everyone off at first.
> 
> - Jonathan M Davis
> 

Yeah, +1.

Or @isolated, as in "isolated from outer scopes".



Re: Memory allocation purity

2014-05-15 Thread luka8088 via Digitalmars-d
On 15.5.2014. 12:48, Don wrote:
> On Thursday, 15 May 2014 at 10:31:47 UTC, luka8088 wrote:
>> On 15.5.2014. 11:45, Don wrote:
>>> On Thursday, 15 May 2014 at 08:14:50 UTC, luka8088 wrote:
>>>> On 15.5.2014. 8:58, Jonathan M Davis via Digitalmars-d wrote:
>>>>> On Thu, 15 May 2014 05:51:14 +
>>>>> via Digitalmars-d  wrote:
>>>>>
>>>>>> Yep, purity implies memoing.
>>>>>
>>>>> No, it doesn't. _All_ that it means when a function is pure is that
>>>>> it cannot
>>>>> access global or static variables unless they can't be changed after
>>>>> being
>>>>> initialized (e.g. they're immutable, or they're const value types),
>>>>> and it
>>>>> can't call any other functions which aren't pure. It means _nothing_
>>>>> else. And
>>>>> it _definitely_ has nothing to do with functional purity.
>>>>>
>>>>> Now, combined with other information, you _can_ get functional purity
>>>>> out it -
>>>>> e.g. if all the parameters to a function are immutable, then it _is_
>>>>> functionally pure, and optimizations requiring functional purity can
>>>>> be done
>>>>> with that function. But by itself, pure means nothing of the sort.
>>>>>
>>>>> So, no, purity does _not_ imply memoization.
>>>>>
>>>>> - Jonathan M Davis
>>>>>
>>>>
>>>> Um. Yes it does. http://dlang.org/function.html#pure-functions
>>>> "functional purity (i.e. the guarantee that the function will always
>>>> return the same result for the same arguments)"
>>>>
>>>> The fact that it should not be able to affect or be affected by the
>>>> global state is not a basis for purity, but rather a consequence.
>>>>
>>>> Even other sources are consistent on this matter, and this is what
>>>> purity by definition is.
>>>
>>>
>>> Please note: D's 'pure' annotation does *not* mean that the function is
>>> pure. It means that it is statically verified to be OK to call it from a
>>> pure function.
>>>
>>> The compiler determines if a function is pure, the programmer never
>>> does.
>>>
>>> There are two things going on here, and they are quite distinct.
>>>
>>> (1) Really the keyword should be something like '@noglobal', rather than
>>> 'pure'. It's called pure for historical reasons. To reduce confusion
>>> I'll call D's pure '@noglobal' and the functional languages pure
>>> '@memoizable'.
>>>
>>> But it turns out that @memoizable isn't actually an interesting
>>> property, whereas '@noglobal' is.
>>>
>>> "No global state" is a deep, transitive property of a function.
>>> "Memoizable" is a superficial supersetextra property which the compiler
>>> can trivially determine from @noglobal.
>>>
>>> Suppose you have function f(), which calls function g().
>>>
>>> If f does not depend on global state, then g must not depend on global
>>> state.
>>>
>>> BUT f() can be memoizable even if g() is not memoizable.
>>>
>>> This approach used by D enormously increases the number of functions
>>> which can be statically proven to be pure. The nomenclature can create
>>> confusion though.
>>>
>>>
>>> (2) Allowing GC activity inside a @noglobal function does indeed weaken
>>> our ability to memoize.
>>>
>>> The compiler can still perform memoizing operations on most functions
>>> that return GC-allocated memory, but it's more difficult. We don't yet
>>> have data on how much of a problem this is.
>>>
>>> An interesting side-effect of the recent addition of @nogc to the
>>> language, is that we get this ability back.
>>>
>>
>> Yeah, I read all about weak/strong purity and I do understand the
>> background. I was talking about strong purity, maybe I should have
>> pointed that out.
>>
>> So, to correct myself: as I understood, strong purity implies
>> memoization. Am I correct?
> 
> Yes. 'strong pure' means pure in the way that the functional language
> crowd means 'pure'.
> 'weak pure' just means doesn't use globals.
> 
> But note that "strong purity" isn't an official concept, it was just the
> terminology I used when explaining to Walter what I meant. I don't like the
> term because it's rather misleading
> -- in reality you could define a whole range of purity strengths (more
> than just two).
> The stronger the purity, the more optimizations you can apply.
> 

Ok. Now it is much clearer, thanks.
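A small sketch of Don's f()/g() point (my addition):

  // g is weakly pure: no globals, but it mutates data through its ref
  // parameter, so calls to g cannot be memoized
  int g (ref int x) pure {
    return ++x;
  }

  // f is strongly pure: it takes only values, so two calls f(5) may be
  // collapsed into one, even though f calls the non-memoizable g
  int f (int a) pure {
    int tmp = a;
    return g(tmp) + g(tmp);
  }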



Re: Memory allocation purity

2014-05-15 Thread luka8088 via Digitalmars-d
On 15.5.2014. 11:45, Don wrote:
> On Thursday, 15 May 2014 at 08:14:50 UTC, luka8088 wrote:
>> On 15.5.2014. 8:58, Jonathan M Davis via Digitalmars-d wrote:
>>> On Thu, 15 May 2014 05:51:14 +
>>> via Digitalmars-d  wrote:
>>>
>>>> Yep, purity implies memoing.
>>>
>>> No, it doesn't. _All_ that it means when a function is pure is that
>>> it cannot
>>> access global or static variables unless they can't be changed after
>>> being
>>> initialized (e.g. they're immutable, or they're const value types),
>>> and it
>>> can't call any other functions which aren't pure. It means _nothing_
>>> else. And
>>> it _definitely_ has nothing to do with functional purity.
>>>
>>> Now, combined with other information, you _can_ get functional purity
>>> out it -
>>> e.g. if all the parameters to a function are immutable, then it _is_
>>> functionally pure, and optimizations requiring functional purity can
>>> be done
>>> with that function. But by itself, pure means nothing of the sort.
>>>
>>> So, no, purity does _not_ imply memoization.
>>>
>>> - Jonathan M Davis
>>>
>>
>> Um. Yes it does. http://dlang.org/function.html#pure-functions
>> "functional purity (i.e. the guarantee that the function will always
>> return the same result for the same arguments)"
>>
>> The fact that it should not be able to affect or be affected by the
>> global state is not a basis for purity, but rather a consequence.
>>
>> Even other sources are consistent on this matter, and this is what
>> purity by definition is.
> 
> 
> Please note: D's 'pure' annotation does *not* mean that the function is
> pure. It means that it is statically verified to be OK to call it from a
> pure function.
> 
> The compiler determines if a function is pure, the programmer never does.
> 
> There are two things going on here, and they are quite distinct.
> 
> (1) Really the keyword should be something like '@noglobal', rather than
> 'pure'. It's called pure for historical reasons. To reduce confusion
> I'll call D's pure '@noglobal' and the functional languages pure
> '@memoizable'.
> 
> But it turns out that @memoizable isn't actually an interesting
> property, whereas '@noglobal' is.
> 
> "No global state" is a deep, transitive property of a function.
> "Memoizable" is a superficial supersetextra property which the compiler
> can trivially determine from @noglobal.
> 
> Suppose you have function f(), which calls function g().
> 
> If f does not depend on global state, then g must not depend on global
> state.
> 
> BUT f() can be memoizable even if g() is not memoizable.
> 
> This approach used by D enormously increases the number of functions
> which can be statically proven to be pure. The nomenclature can create
> confusion though.
> 
> 
> (2) Allowing GC activity inside a @noglobal function does indeed weaken
> our ability to memoize.
> 
> The compiler can still perform memoizing operations on most functions
> that return GC-allocated memory, but it's more difficult. We don't yet
> have data on how much of a problem this is.
> 
> An interesting side-effect of the recent addition of @nogc to the
> language, is that we get this ability back.
> 

Yeah, I read all about weak/strong purity and I do understand the
background. I was talking about strong purity, maybe I should have
pointed that out.

So, to correct myself: as I understood, strong purity implies
memoization. Am I correct?



Re: Memory allocation purity

2014-05-15 Thread luka8088 via Digitalmars-d
On 15.5.2014. 11:35, Jonathan M Davis via Digitalmars-d wrote:
> On Thu, 15 May 2014 10:14:48 +0200
> luka8088 via Digitalmars-d  wrote:
> 
>> On 15.5.2014. 8:58, Jonathan M Davis via Digitalmars-d wrote:
>>> On Thu, 15 May 2014 05:51:14 +
>>> via Digitalmars-d  wrote:
>>>
>>>> Yep, purity implies memoing.
>>>
>>> No, it doesn't. _All_ that it means when a function is pure is that
>>> it cannot access global or static variables unless they can't be
>>> changed after being initialized (e.g. they're immutable, or they're
>>> const value types), and it can't call any other functions which
>>> aren't pure. It means _nothing_ else. And it _definitely_ has
>>> nothing to do with functional purity.
>>>
>>> Now, combined with other information, you _can_ get functional
>>> purity out it - e.g. if all the parameters to a function are
>>> immutable, then it _is_ functionally pure, and optimizations
>>> requiring functional purity can be done with that function. But by
>>> itself, pure means nothing of the sort.
>>>
>>> So, no, purity does _not_ imply memoization.
>>>
>>> - Jonathan M Davis
>>>
>>
>> Um. Yes it does. http://dlang.org/function.html#pure-functions
>> "functional purity (i.e. the guarantee that the function will always
>> return the same result for the same arguments)"
>>
>> The fact that it should not be able to affect or be affected by the
>> global state is not a basis for purity, but rather a consequence.
>>
>> Even other sources are consistent on this matter, and this is what
>> purity by definition is.
>>
> 
> Then reread the paragraph at the top of the section of the documentation
> that you linked to:
> 
> "Pure functions are functions which cannot access global or static,
> mutable state save through their arguments. This can enable
> optimizations based on the fact that a pure function is guaranteed to
> mutate nothing which isn't passed to it, and in cases where the
> compiler can guarantee that a pure function cannot alter its arguments,
> it can enable full, functional purity (i.e. the guarantee that the
> function will always return the same result for the same arguments)."
> 
> That outright says that pure only _can_ enable functional purity - in
> particular when the compiler is able to guarantee that the function
> cannot mutate its arguments. pure itself however means nothing of the
> sort. The fact that pure functions cannot access global state _is_ the
> basis for functional purity when combined with parameters whose
> arguments cannot be mutated.

I am aware of weak/strong purity. I am only talking about strong purity now.

To quote bearophile:

bool randomBit() pure nothrow @safe {
  return (new int[1].ptr) > (new int[1].ptr);
}

void main() {}

"Pure functions are functions which cannot access global or static,
mutable state save through their arguments." - no objections here

"This can enable optimizations based on the fact that a pure function is
guaranteed to mutate nothing which isn't passed to it, and in cases
where the compiler can guarantee that a pure function cannot alter its
arguments, it can enable full, functional purity (i.e. the guarantee
that the function will always return the same result for the same
arguments)." - no arguments where passed to the function, it should
always return the same result

> 
> If you get hung up on what the concept of functional purity is or what
> you thought pure was before using D, then you're going to have a hard
> time understanding what pure means in D. And yes, it's a bit weird, but
> it comes from the practical standpoint of how to make functional purity
> possible without being too restrictive to be useful. So, it really
> doesn't matter what other sources say about what purity means. That's
> not what D's pure means. D's pure is just a building block for what
> purity normally means. It makes it so that the compiler can detect
> functional purity and then optimize based on it, but it doesn't in and
> of itself have anything to do with functional purity.
> 
> If the documentation isn't getting that across, then I guess that it
> isn't clear enough. But I would have thought that the part that said
> "and in cases where the compiler can guarantee that a pure function
> cannot alter its arguments, it can enable full, functional purity"
> would have made it clear that D's pure is _not_ functionally pure by
> itself. The first part of the paragraph says what pure really means:
> "Pure functions

Re: Memory allocation purity

2014-05-15 Thread luka8088 via Digitalmars-d
On 15.5.2014. 8:58, Jonathan M Davis via Digitalmars-d wrote:
> On Thu, 15 May 2014 05:51:14 +
> via Digitalmars-d  wrote:
> 
>> Yep, purity implies memoing.
> 
> No, it doesn't. _All_ that it means when a function is pure is that it cannot
> access global or static variables unless they can't be changed after being
> initialized (e.g. they're immutable, or they're const value types), and it
> can't call any other functions which aren't pure. It means _nothing_ else. And
> it _definitely_ has nothing to do with functional purity.
> 
> Now, combined with other information, you _can_ get functional purity out it -
> e.g. if all the parameters to a function are immutable, then it _is_
> functionally pure, and optimizations requiring functional purity can be done
> with that function. But by itself, pure means nothing of the sort.
> 
> So, no, purity does _not_ imply memoization.
> 
> - Jonathan M Davis
> 

Um. Yes it does. http://dlang.org/function.html#pure-functions
"functional purity (i.e. the guarantee that the function will always
return the same result for the same arguments)"

The fact that it should not be able to affect or be affected by the
global state is not a basis for purity, but rather a consequence.

Even other sources are consistent on this matter, and this is what
purity by definition is.



Re: range behaviour

2014-05-13 Thread luka8088 via Digitalmars-d
On 13.5.2014. 19:40, H. S. Teoh via Digitalmars-d wrote:
> On Tue, May 13, 2014 at 01:29:32PM -0400, Steven Schveighoffer via 
> Digitalmars-d wrote:
> [...]
> Even in this case, I'd put an in-contract on f2 that verifies that the
> range is indeed non-empty:
> 
>   ...
>   void f2(R)(R r)
>   if (isInputRange!R)
>   in { assert(!r.empty); }
>   body {
>   doSomething(r.front);
>   }
> 
> [...]

This is a potential issue, because if it turns out that empty _must_ be
called, then the author could put the front-population logic inside empty.

Consider:

struct R {
  bool empty () { front = 1; return false; }
  int front = 0;
  void popFront () { front = 0; }
}

This is valid code if empty _must_ be called, but it will behave
differently if passed to f2 when asserts are compiled out: in that case
empty is never called and front is never populated.
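A self-contained version of the scenario (my addition), showing the
observable difference:

  import std.range : isInputRange;
  import std.stdio : writeln;

  struct R {
    bool empty () { front = 1; return false; }
    int front = 0;
    void popFront () { front = 0; }
  }

  void f2 (Range) (Range r)
  if (isInputRange!Range)
  in { assert(!r.empty); }
  body {
    // prints 1 when contracts are compiled in (empty ran and set front),
    // prints 0 when built with -release (empty never ran)
    writeln(r.front);
  }

  void main () {
    f2(R());
  }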

Because of this I think that it is necessary to document range behavior.



Re: More radical ideas about gc and reference counting

2014-05-09 Thread luka8088 via Digitalmars-d
On 6.5.2014. 20:10, Walter Bright wrote:
> On 5/6/2014 10:47 AM, Manu via Digitalmars-d wrote:
>> On 7 May 2014 01:46, Andrei Alexandrescu via Digitalmars-d
>> I'm not even sure what the process it... if I go through and "LGTM" a
>> bunch of pulls, does someone accept my judgement and click the merge
>> button?
>> You can see why I might not feel qualified to do such a thing?
> 
> You don't need to be qualified (although you certainly are) to review
> PR's. The process is anyone can review/comment on them.
> Non-language-changing PR's can be pulled by anyone on "Team DMD".
> Language changing PR's need to be approved by Andrei and I.
> 
> "Team DMD" consists of people who have a consistent history of doing
> solid work reviewing PR's.
> 

Interesting. This really needs to be pointed out on the site.



Re: From slices to perfect imitators: opByValue

2014-05-07 Thread luka8088 via Digitalmars-d
On 8.5.2014. 5:58, Andrei Alexandrescu wrote:
> 
> This magic of T[] is something that custom ranges can't avail themselves
> of. In order to bring about parity, we'd need to introduce opByValue
> which (if present) would be automatically called whenever the object is
> passed by value into a function.
> 
> This change would allow library designers to provide good solutions to
> making immutable and const ranges work properly - the way T[] works.
> 

Looks very similar to some kind of opImplicitConvert.

http://forum.dlang.org/thread/teddgvbtmrxumffrh...@forum.dlang.org
http://forum.dlang.org/thread/gq0fj7$4av$1...@digitalmars.com

Maybe it would be better to have a more general solution instead of a
special-case solution, if there is no reason against implicit conversion,
of course.



Re: TLBB: The Last Big Breakage

2014-03-16 Thread luka8088
On 16.3.2014. 5:08, Andrei Alexandrescu wrote:
> D1's approach to multithreading was wanting. D2 executed a big departure
> from that with the shared qualifier and the default-thread-local
> approach to data.
> 
> We think this is a win, but D2 inherited a lot of D1's thread-related
> behavior by default, and some of the rules introduced by TDPL
> (http://goo.gl/9gtH0g) remained in the "I have a dream" stage.
> 
> Fixing that has not gained focus until recently, when e.g.
> https://github.com/D-Programming-Language/dmd/pull/3067 has come about.
> There is other more stringent control of shared members, e.g.
> "synchronized" is all or none, "synchronized" only makes direct member
> variables unshared, and more.
> 
> This will statically break code. It will refuse to compile code that is
> incorrect, but also plenty of code that is correct; the compiler will
> demand extra guarantees from user code, be they in the form of casts and
> stated assumptions.
> 
> I believe this is a bridge we do need to cross. One question is how we
> go about it: all at once, or gradually?
> 
> 
> Andrei
> 

+1 on fixing this!



Re: null dereference

2014-03-16 Thread luka8088
On 16.3.2014. 0:22, Jonathan M Davis wrote:
> On Saturday, March 15, 2014 11:05:42 luka8088 wrote:
>> I was thinking and I am not sure about the reason for not having some
>> kind of safeguard for null dereferencing in version(assert)/debug builds.
>>
>> One possible reason that comes to mind is that it somewhat affects
>> performance but should this really be an issue in version(assert)/debug
>> build? Especially given the benefit of having file/line number and stack
>> information outputted.
> 
> Essentially what it comes down to is the fact that because the OS already 
> detects null pointer dereferences for you (hence the segfault or access 
> violation that you get when it occurs), Walter considers it unnecessary. If 
> you want more details, look at the resulting core dump in a debugger or run 
> the program in a debugger to begin with. Now, that obviously doesn't always 
> work, which is part of why many folks argue in favor of adding additional 
> checks, but that's Walter's position.
> 
> I believe that there was some work done to make it so that druntime would 
> detect a segfault and print a stacktrace when that happens, but it's not 
> enabled normally, and I don't know quite what state it's in. That would 
> probably be the ideal solution though, since it gives you the stacktrace 
> without requiring additional checks.
> 
> - Jonathan M Davis
> 


That is great, at least in theory, but at the end of the day the
following code still performs an invalid memory operation and ends up
being killed by the OS without any chance of recovery.

void main () @safe {
  Object o = null;
  o.toHash();
}


I saw multiple failed attempts to implement handlers for such operations
and to throw exceptions instead. I don't remember why, and I can't find
the post, but there were claims that it will never be turned on by default.

etc.linux.memoryerror.registerMemoryErrorHandler(), which Adam pointed
out, is great but somewhat loses its point if it is not turned on by
default.


Consider a simple scenario. I am writing a vibe.d application and I
deploy it to a Linux server. Now for testing purposes I make a
non-release build and add:

try { ... } catch (Throwable t) { logCrash(t); }

Unfortunately a null dereference happened and I have no info why. For
argument's sake let's say that this happens rarely and I am not able to
reproduce it on my machine. Now I go to the docs looking for a way to get
more info out of this, and there is nothing about it in the docs.



Re: null dereference

2014-03-15 Thread luka8088
On 15.3.2014. 14:34, Ary Borenszweig wrote:
> On 3/15/14, 8:25 AM, bearophile wrote:
>> luka8088:
>>
>>> I was thinking and I am not sure about the reason for not having some
>>> kind of safeguard for null dereferencing in version(assert)/debug
>>> builds.
>>
>> Eventually reference dereference in D will be guarded by an assert in
>> non-release builds. This desire is a rising tide that eventually can't
>> be stopped.
>>
>> Bye,
>> bearophile
> 
> Really? I thought Walter was against this. He always says you can fire
> up a debugger and check where the dereference occurred.

Hm, that is true, but I think it should be the default behavior in
version(assert).

I saw many discussions on this topic and many arguments, but I am still
not able to digest that the following produces an invalid memory operation:

void main () @safe {
  Object o = null;
  o.toHash();
}




Re: null dereference

2014-03-15 Thread luka8088
On 15.3.2014. 16:37, Adam D. Ruppe wrote:
> There is a hidden module on Linux which can activate this, sort of:
> 
> void main() {
>// these two lines turn it on
>import etc.linux.memoryerror;
>registerMemoryErrorHandler();
> 
>Object o = null;
>o.toString(); // trigger it here
> }
> 
> etc.linux.memoryerror.NullPointerError@src/etc/linux/memoryerror.d(325):
> 
> ../test56(void etc.linux.memoryerror.sigsegvDataHandler()+0xb) [0x805d44b]
> ../test56(_Dmain+0xa) [0x805c8ea]
> ../test56(void rt.dmain2._d_run_main(int, char**, extern (C) int
> function(char[][])*).runAll().void __lambda1()+0x10) [0x805cb58]
> ../test56(void rt.dmain2._d_run_main(int, char**, extern (C) int
> function(char[][])*).tryExec(scope void delegate())+0x18) [0x805cad0]
> ../test56(void rt.dmain2._d_run_main(int, char**, extern (C) int
> function(char[][])*).runAll()+0x27) [0x805cb1f]
> ../test56(void rt.dmain2._d_run_main(int, char**, extern (C) int
> function(char[][])*).tryExec(scope void delegate())+0x18) [0x805cad0]
> ../test56(_d_run_main+0x117) [0x805ca67]
> ../test56(main+0x14) [0x805c90c]
> /lib/libc.so.6(__libc_start_main+0xe6) [0xf75f3b86]
> ../test56() [0x805c831]
> 
> 
> 
> 
> As you can see there, the top line doesn't really help much, it just
> lists the  druntime module, but the stack trace can help: it shows
> _Dmain in there, and if you add other functions, you can see them too.
> 
> So still not quite a file+line number for the exact place (you can get
> that in a debugger fairly easily though) but helps find where it is,
> especially if you have fairly small functions.

I was not aware of this. Thanks!



Re: double.init is nan ..?

2014-03-15 Thread luka8088
On 14.3.2014. 17:04, Etienne wrote:
> I'm trying to compare two doubles as part of a cache framework. To put
> it simply, double.init == double.init ... is false?

Note that you should not compare floating point types for equality!

http://www.parashift.com/c++-faq/floating-point-arith.html
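In this particular case the deeper reason is that double.init is NaN,
and NaN never compares equal to anything, including itself (a small
illustration, my addition):

  import std.math : isNaN;

  void main () {
    double d;                  // defaults to double.init, which is NaN
    assert(d != d);            // NaN is unequal even to itself
    assert(d is double.init);  // `is` does a bitwise comparison instead
    assert(isNaN(d));          // the usual way to test for NaN
  }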



Re: null dereference

2014-03-15 Thread luka8088
On 15.3.2014. 12:25, bearophile wrote:
> luka8088:
> 
>> I was thinking and I am not sure about the reason for not having some
>> kind of safeguard for null dereferencing in version(assert)/debug builds.
> 
> Eventually reference dereference in D will be guarded by an assert in
> non-release builds. This desire is a rising tide that eventually can't
> be stopped.
> 
> Bye,
> bearophile

I am very glad to hear that! It is very frustrating for a program to
segfault without giving any information (on Linux), even in debug mode.




null dereference

2014-03-15 Thread luka8088
I was thinking, and I am not sure about the reason for not having some
kind of safeguard for null dereferencing in version(assert)/debug builds.

One possible reason that comes to mind is that it somewhat affects
performance, but should this really be an issue in a version(assert)/debug
build? Especially given the benefit of having file/line number and stack
information in the output.




module program;

import std.stdio;

void main () {

  A a1 = new A();

  // auto inserted by the compiler before accessing object member
  // on version(assert) (or maybe on debug?)
  version(assert)
if (a1 is null)
  throw new NullDereferenceError("Null Dereference");

  a1.f();

  A a2;

  // auto inserted by the compiler before accessing object member
  // on version(assert) (or maybe on debug?)
  version(assert)
if (a2 is null)
  throw new NullDereferenceError("Null Dereference");

  a2.f();

}

class A {

  void f () {
writeln("A.f called");
  }

}

class NullDereferenceError : Error {
  this (string msg, string file = __FILE__, size_t line = __LINE__) {
super(msg, file, line);
  }
}


Re: Final by default?

2014-03-13 Thread luka8088
On 13.3.2014. 0:48, Walter Bright wrote:
> On 3/12/2014 4:01 PM, luka8088 wrote:
>> How do you nearly lose a client over a change in a development branch
>> which was never a part of any release? (or am I mistaken?)
> 
> The change went into a release.

I see, that indeed is an issue.



Re: Final by default?

2014-03-12 Thread luka8088
On 12.3.2014. 23:50, Walter Bright wrote:
> But we nearly lost a major client over it.

How do you nearly lose a client over a change in a development branch
which was never part of any release? (Or am I mistaken?)

You seem to have very demanding clients :)

On a side thought, maybe there should also be a stable branch?



Re: Testing some singleton implementations

2014-02-10 Thread luka8088
On 10.2.2014. 13:44, luka8088 wrote:
> On 10.2.2014. 10:54, Andrej Mitrovic wrote:
>> On 2/9/14, luka8088  wrote:
>>>   private static __gshared typeof(this) instance_;
>>
>> Also, "static __gshared" is really meaningless here, it's either
>> static (TLS), or globally shared, either way it's not a class
>> instance, so you can type __gshared alone here. Otherwise I'm not sure
>> what the semantics of a per-class-instance __gshared field would be,
>> if that can exist.
>>
> 
> "static" does not meat it must be tls, as "static shared" is valid.
> 
> I just like to write that it is static and not shared. I know that
> __gshared does imply static but this implication is not intuitive to me
> so I write it explicitly.
> 
> For example, I think that the following code should output 5 and 6 (as
> it would if __gshared did not imply static):
> 
> 
> module program;
> 
> import std.stdio;
> import core.thread;
> 
> class A {
>   __gshared int i;
> }
> 
> void main () {
> 
>   auto a1 = new A();
>   auto a2 = new A();
> 
>   (new Thread({
> a1.i = 5;
> a2.i = 6;
> (new Thread({
>   writeln(a1.i);
>   writeln(a2.i);
> })).start();
>   })).start();
> 
> }
> 
> 
> But in any case, this variable is just __gshared.
> 

Um, actually this makes no sense. But anyway, I will mark it static.



Re: Testing some singleton implementations

2014-02-10 Thread luka8088
On 10.2.2014. 10:59, Andrej Mitrovic wrote:
> On 2/9/14, luka8088  wrote:
>> dmd -release -inline -O -noboundscheck -unittest -run singleton.d
>>
>> Test time for LockSingleton: 901 msecs.
>> Test time for SyncSingleton: 20.75 msecs.
>> Test time for AtomicSingleton: 169 msecs.
>> Test time for FunctionPointerSingleton: 7.5 msecs.
> 
> C:\dev\code\d_code>test_dmd
> Test time for LockSingleton: 438 msecs.
> Test time for SyncSingleton: 6.25 msecs.
> Test time for AtomicSingleton: 8 msecs.
> Test time for FunctionPointerSingleton: 5 msecs.
> 
> C:\dev\code\d_code>test_ldc
> Test time for LockSingleton: 575.5 msecs.
> Test time for SyncSingleton: 5 msecs.
> Test time for AtomicSingleton: 3 msecs.
> Test time for FunctionPointerSingleton: 5.25 msecs.
> 
> It seems it makes a tiny bit of difference for DMD, but LDC still
> generates better codegen for the atomic version.
> 

Could it be that TLS is slower in LLVM?



Re: Testing some singleton implementations

2014-02-10 Thread luka8088
On 10.2.2014. 10:54, Andrej Mitrovic wrote:
> On 2/9/14, luka8088  wrote:
>>   private static __gshared typeof(this) instance_;
> 
> Also, "static __gshared" is really meaningless here, it's either
> static (TLS), or globally shared, either way it's not a class
> instance, so you can type __gshared alone here. Otherwise I'm not sure
> what the semantics of a per-class-instance __gshared field would be,
> if that can exist.
> 

"static" does not meat it must be tls, as "static shared" is valid.

I just like to write that it is static and not shared. I know that
__gshared does imply static but this implication is not intuitive to me
so I write it explicitly.

For example, I think that the following code should output 5 and 6 (as
it would if __gshared did not imply static):


module program;

import std.stdio;
import core.thread;

class A {
  __gshared int i;
}

void main () {

  auto a1 = new A();
  auto a2 = new A();

  (new Thread({
a1.i = 5;
a2.i = 6;
(new Thread({
  writeln(a1.i);
  writeln(a2.i);
})).start();
  })).start();

}


But in any case, this variable is just __gshared.



Re: Testing some singleton implementations

2014-02-10 Thread luka8088
On 10.2.2014. 10:52, Andrej Mitrovic wrote:
> On 2/9/14, luka8088  wrote:
>> What about swapping the function pointer so the check is done only once
>> per thread? (The thread is tl;dr, so I am sorry if someone already
>> suggested this.)
> 
> Interesting solution for sure.
> 
>>   // tls
>>   @property static typeof(this) function () get;
> 
> This confused me for a second since @property is meaningless for variables. :>
> 

Yeah. My mistake. It should be removed.



Re: Testing some singleton implementations

2014-02-09 Thread luka8088
On 9.2.2014. 19:51, Stanislav Blinov wrote:
> On Sunday, 9 February 2014 at 18:06:46 UTC, Martin Nowak wrote:
>> On 02/09/2014 01:20 PM, luka8088 wrote:
>>> class FunctionPointerSingleton {
>>>
>>>   private static __gshared typeof(this) instance_;
>>>
>>>   // tls
>>>   @property static typeof(this) function () get;
>> You don't even need to make this TLS, right?
> 
> I don't follow. get should be TLS, as a replacement for SyncSingleton's
> _instantiated TLS bool.

It is TLS and it needs to be TLS, because one thread could be replacing
what get points to while another is trying to access it. It's either TLS
or putting some synchronization above it, which would break the whole
idea of executing the synchronized block only once per thread.



Re: Testing some singleton implementations

2014-02-09 Thread luka8088
On 9.2.2014. 15:09, Stanislav Blinov wrote:
> On Sunday, 9 February 2014 at 12:20:54 UTC, luka8088 wrote:
> 
>> What about swapping the function pointer so the check is done only once
>> per thread? (The thread is tl;dr, so I am sorry if someone already
>> suggested this.)
> 
> That is an interesting idea indeed, though it seems to be faster only
> for dmd. I haven't studied the assembly yet, but with LDC I don't see
> any noticeable difference between SyncSingleton and
> FunctionPointerSingleton.

I got the idea while writing code for dynamic languages (especially
JavaScript). The thought came that instead of checking for something that
you know will always have the same result, you can just remove that piece
of code, and voila :)



Re: Testing some singleton implementations

2014-02-09 Thread luka8088
On 31.1.2014. 9:25, Andrej Mitrovic wrote:
> There was a nice blog-post about implementing low-lock singletons in D, here:
> http://davesdprogramming.wordpress.com/2013/05/06/low-lock-singletons/
> 
> One suggestion on Reddit was by dawgfoto (I think this is Martin
> Nowak?), to use atomic primitives instead:
> http://www.reddit.com/r/programming/comments/1droaa/lowlock_singletons_in_d_the_singleton_pattern/c9tmz07
> 
> I wanted to benchmark these different approaches. I was expecting
> Martin's implementation to be the fastest one, but on my machine
> (Athlon II X4 620 - 2.61GHz) the implementation in the blog post turns
> out to be the fastest one. I'm wondering whether my test case is
> flawed in some way. Btw, I think we should put an implementation of
> this into Phobos.
> 
> The timings on my machine:
> 
> Test time for LockSingleton: 542 msecs.
> Test time for SyncSingleton: 20 msecs.
> Test time for AtomicSingleton: 755 msecs.
> 

What about swapping the function pointer so the check is done only once
per thread? (The thread is tl;dr, so I am sorry if someone already
suggested this.)

--

class FunctionPointerSingleton {

  private static __gshared typeof(this) instance_;

  // tls
  @property static typeof(this) function () get;

  static this () {
get = {
  synchronized {
if (instance_ is null)
  instance_ = new typeof(this)();
get = { return instance_; };
return instance_;
  }
};
  }

}

--

dmd -release -inline -O -noboundscheck -unittest -run singleton.d

Test time for LockSingleton: 901 msecs.
Test time for SyncSingleton: 20.75 msecs.
Test time for AtomicSingleton: 169 msecs.
Test time for FunctionPointerSingleton: 7.5 msecs.

I don't have such a muscular machine xD



Re: Idea #1 on integrating RC with GC

2014-02-06 Thread luka8088
On 5.2.2014. 0:51, Andrei Alexandrescu wrote:
> Consider we add a library slice type called RCSlice!T. It would have the
> same primitives as T[] but would use reference counting through and
> through. When the last reference count is gone, the buffer underlying
> the slice is freed. The underlying allocator will be the GC allocator.
> 
> Now, what if someone doesn't care about the whole RC thing and aims at
> convenience? There would be a method .toGC that just detaches the slice
> and disables the reference counter (e.g. by setting it to uint.max/2 or
> whatever).
> 
> Then people who want reference counting say
> 
> auto x = fun();
> 
> and those who don't care say:
> 
> auto x = fun().toGC();
> 
> 
> Destroy.
> 
> Andrei

Here is a thought:

Let's say we have class A and class B, and class A accepts references to
B as children:

class A {
  B child1;
  B child2;
  B child3;
}

I think that the ultimate goal is to allow the user to choose between
kinds of memory management, especially between automatic and manual. The
problem here is that class A needs to be aware of whether memory
management is manual or automatic. And it seems to me that a new type
qualifier is the way to go:

class A {
  garbageCollected(B) child1;
  referenceCounted(B) child2;
  manualMemory(B) child3;
}

Now suppose we want to have only one child but we want to support
compatibility with other kinds of memory management:

class A {
  manualMemory(B) child;

  this (B newChild) {
child = newChild.toManualMemory();
  }

  this (referenceCounted(B) newChild) {
child = newChild.toManualMemory();
  }

  this (manualMemory(B) newChild) {
child = newChild;
  }

  ~this () {
delete child;
  }

}

This way we could write code that supports multiple models and let the
user choose which one to use. The thing that I would like to point out is
that this suggestion would work with existing code, as the
garbageCollected memory management model would be the default:

auto b = new B();
auto a = new A(b);

Another thing to note is that in this way the garbage collector would
know that we now have two references to one object (an instance of class
B): one is the variable b and another is child in object a. And because
of the notation the garbage collector is aware that it could free this
object when variable b goes out of scope, but it should not do so,
because there is still a manually managed reference to that object.

I am sure that there are many more possible loopholes, but maybe it will
give someone a better idea :)



Re: Compiling dmd on Windows

2014-02-01 Thread luka8088
On 1.2.2014. 10:40, luka8088 wrote:
> On 1.2.2014. 9:13, Paulo Pinto wrote:
>> Hi,
>>
>> is there any page on how to compile the whole dmd, druntime and phobos
>> on Windows?
>>
>> I am facing a few issues with me hacking around win32.mak files, related
>> to tools location, missing tools(detab) and expected UNIX tools (e.g. cp
>> instead of copy).
>>
>> -- 
>> Paulo
> 
> https://dl.dropboxusercontent.com/u/18386187/contribute.html
> 

Just a disclaimer (as I see that the author is not pointed out): this
was not written by me, and I found this link somewhere on the newsgroup
some time ago.



Re: extend "in" to all array types

2014-01-15 Thread luka8088
On 15.1.2014. 16:30, pplantinga wrote:
> In python, I really like the ability to check if an element is in an array:
> 
> if x in array:
>   # do something
> 
> D has this, but only for associative arrays. Is there any chance we
> could extend this to every kind of array?

D uses this operator for checking against associative array keys, not
values.
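A small illustration (my addition): "in" looks up a key in an
associative array; to search a dynamic array by value you can use
std.algorithm.canFind:

  import std.algorithm : canFind;

  void main () {
    int[string] aa = ["a": 1, "b": 2];
    assert("a" in aa);        // key lookup, yields a pointer to the value

    int[] arr = [1, 2, 3];
    assert(arr.canFind(2));   // value search in a dynamic array
  }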



Re: [OT] Efficient file structure for very large lookup tables?

2013-12-17 Thread luka8088
On 17.12.2013. 20:08, H. S. Teoh wrote:
> Another OT thread to pick your brains. :)
> 
> What's a good, efficient file structure for storing extremely large
> lookup tables? (Extremely large as in > 10 million entries, with keys
> and values roughly about 100 bytes each.) The structure must support
> efficient adding and lookup of entries, as these two operations will be
> very frequent.
> 
> I did some online research, and it seems that hashtables perform poorly
> on disk, because the usual hash functions cause random scattering of
> related data (which are likely to be accessed with higher temporal
> locality), which incurs lots of disk seeks.
> 
> I thought about B-trees, but they have high overhead (and are a pain to
> implement), and also only exhibit good locality if table entries are
> accessed sequentially; the problem is I'm working with high-dimensional
> data and the order of accesses is unlikely to be sequential. However,
> they do exhibit good spatial locality in higher-dimensional space (i.e.,
> if entry X is accessed first, then the next entry Y is quite likely to
> be close to X in that space).  Does anybody know of a good data
> structure that can take advantage of this fact to minimize disk
> accesses?
> 
> 
> T
> 

The SQLite file format seems to be fairly well documented:
http://www.sqlite.org/fileformat.html
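A minimal sketch (my addition) of using SQLite as a disk-backed
key/value table through the etc.c.sqlite3 binding shipped with dmd
(error handling elided; needs the sqlite3 C library at link time, e.g.
-L-lsqlite3):

  import etc.c.sqlite3;

  void main () {
    sqlite3* db;
    sqlite3_open("lookup.db", &db);
    scope(exit) sqlite3_close(db);

    // a single table as an on-disk key/value store; SQLite's B-tree
    // implementation handles paging and caching
    sqlite3_exec(db,
      "CREATE TABLE IF NOT EXISTS lookup (k BLOB PRIMARY KEY, v BLOB)",
      null, null, null);
    sqlite3_exec(db,
      "INSERT OR REPLACE INTO lookup VALUES (x'00', x'01')",
      null, null, null);
  }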



Re: global vs context variable

2013-12-11 Thread luka8088
On 11.12.2013. 15:47, monarch_dodra wrote:
> On Wednesday, 11 December 2013 at 12:58:54 UTC, luka8088 wrote:
>> Yeah, and it always keeps pooping right up!
> 
> I hate it when issues just keep pooping up. So rude!

Lol, popping xD


Re: global vs context variable

2013-12-11 Thread luka8088
On 11.12.2013. 12:14, Shammah Chancellor wrote:
> On 2013-12-11 08:21:35 +0000, luka8088 said:
> 
>> Examples using such library:
>>
>> void writeOutput () {
>>   writeln("example output");
>> }
>>
>> void main () {
>>
>>   writeOutput();
>>
>>   standardOutputContext(file("example.txt"), {
>> writeOutput();
>>   });
>>
>> }
> 
> 
> What does this method have over just using:
> 
> with(file("example.txt"))
> {
> writeln("Foo");
> }
> 

It works with deep nesting without the need to pass the context as a
function argument:

void writeOutput () {
  writeln("example output");
}

void f2 () {
  writeOutput();
}

void f1 () {
  f2();
}

void main () {

  f1();

  standardOutputContext(file("example.txt"), {
f1();
  });
}



Re: global vs context variable

2013-12-11 Thread luka8088
On 11.12.2013. 10:53, QAston wrote:
> 
> This issue is probably nearly as old as programming itself.
> There are several solutions already developed: dependency injection
> (requires complex configuration rules), manually passing deps as args
> (cumbersome), service locators, or global variables.

Yeah, and it always keeps pooping right up!

> 
> Your solution, as far as I understand it, relies on swapping a global
> variable when inside a context and restoring it afterwards. While this
> would be perfectly fine in an environment where code is executed in OS
> threads, there will be a problem in the case of Vibe.d, which uses
> fibers. But I guess the problem is solvable there as well.

I was thinking about such issues and I think they are all solvable. I
concluded that I should present this issue as simply as possible and
gauge the general interest first.

> 
> Just to note - you have to bind to globals at some point - what if
> someone wants to swap writeln function whith his own (for example to
> call logger, or for whatever reason)? Or he may want to swap dbContext
> :).In my practice I make swappable only things I think I'll need to swap
> in future.

Yes. Not only would swapping need to be permitted, but also only values
of the same type could be swapped. D makes sure of that.



Re: global vs context variable

2013-12-11 Thread luka8088
On 11.12.2013. 9:30, monarch_dodra wrote:
> 
> "write" is really just a global helper/shortcut function that calls
> "stdout.write". If the user wants to customize this, then it's really no
> more complicated than doing a write to a named stream, which represents
> the global out, which may or may not be stdout:
> 
> auto myGlobalOut = stdout;
> myGlobalOut.write(); //Writes to console (technically, "standard output")
> 
> //Change the global output context
> myGlobalOut = File("out.txt", "w");
> myGlobalOut.write(); //Writes to file

Yes. That is exactly what this kind of approach is all about. In
practice I found a need to temporarily change myGlobalOut while keeping
in mind that recursion could be used, and in most cases a stack was
required. You also need to make sure that the stack is popped properly
(even in case of throwing), so generally there is some work to be done. I
think that this kind of approach would be much more user friendly if
library support was introduced.
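A minimal sketch (my addition) of the manual pattern described above,
swapping the global output and restoring it even if the callback throws:

  import std.stdio : File;

  File myGlobalOut;   // the "global out" from the example above

  void withOutput (File f, void delegate () dg) {
    auto saved = myGlobalOut;
    myGlobalOut = f;
    scope(exit) myGlobalOut = saved;   // restored even on throw
    dg();
  }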

> 
> 
> 
> I'm not really sure I understood the rest of what you posted though, so
> I can't make any comments on that.

Take MVC for example. Most frameworks either support global variables or
setting controller properties for stuff that needs to be accessible
globally. I have seen database connections being a property of a
controller, which if you think about it makes no sense, but it was the
most pragmatic way to make the database connection "globally" accessible
without setting it as a global variable.



global vs context variable

2013-12-11 Thread luka8088
Hi everyone!

I would like to address the issue of global variables (or states). In
general my opinion is that they are bad solely because they (in most
cases) lack the ability to hold alternative values (or states) or the
ability to alter them in a user-friendly way.

For example, take the write function from std.stdio. For a third-party
function that uses write to output to the screen, the user is unable to
redirect that output to, for example, a file without altering the
third-party function's body. Or if there is a way, it is rarely user
friendly. For example, in PHP there are functions for buffering output,
but you have to manually start buffering and manually stop it. In Python
the active output is a global variable and you can replace it with a
custom file stream, but you are responsible for maintaining a custom
stack in case of recursion and for switching back to the default output
when it is no longer needed. There are also examples of loggers, database
connections, etc.

I have a proposal to generalize this issue. Attached is an example
library that implements a context-based approach, and I would like to see
this kind of library in Phobos (std.context). It is very simple, and yet
in my experience it has shown to be very useful.

Examples using such library:

void writeOutput () {
  writeln("example output");
}

void main () {

  writeOutput();

  standardOutputContext(file("example.txt"), {
writeOutput();
  });

}

MVC example:

void databaseToView () {
  auto result = db.query("select;");
  view.populate(result);
}

void myAction () {

  auto customView = new View();

  viewContext(customView, {
databaseToView();
  });

  view.regionA.append(customView);

}

void main () {

  dbContext(defaultDbConnection, {
viewContext(defaultView, {
  myAction();
});
  });

}


I would like to add this to Phobos and document it, but I would like to
know if this is desirable at all and to get community feedback.

Thoughts?

module context;

import std.traits;

class context (elementType) {

  private static elementType[] stack;
  private static elementType current;

  static auto opCall (callbackType) (callbackType f)
  if (isCallable!callbackType) {
    return opCall(elementType.init, f);
  }

  static auto opCall (callbackType) (elementType initValue, callbackType f)
  if (isCallable!callbackType) {
    enter(initValue);
    scope(exit) exit();
    return f();
  }

  static void enter () {
enter(elementType.init);
  }

  static void enter (elementType initValue) {
if (stack.length > 0)
  stack[$ - 1] = current;
stack ~= initValue;
current = stack[$ - 1];
  }

  static void exit () {
stack = stack[0 .. $ - 1];
if (stack.length > 0)
  current = stack[$ - 1];
  }

  @property static bool isSet () {
return stack.length > 0;
  }

}

unittest {

  alias context!int myCounterContext;
  alias myCounterContext.current myCounter;

  myCounterContext(1, {
myCounterContext(2, {
  myCounterContext(3, {
assert(myCounter == 3);
  });
  assert(myCounter == 2);
});
assert(myCounter == 1);
  });

}

unittest {

  alias context!int myCounterContext;
  alias myCounterContext.current myCounter;

  void f1 () {
assert(myCounter == 1);
  }

  myCounterContext(1, {
assert(myCounter == 1);
f1();
  });

}

unittest {

  alias context!int myCounterContext;
  alias myCounterContext.current myCounter;

  myCounterContext({
assert(myCounter == 0);
myCounter = 1;

myCounterContext({
  assert(myCounter == 0);
  myCounter = 2;

  myCounterContext({
assert(myCounter == 0);
myCounter = 3;
assert(myCounter == 3);
  });

  myCounterContext({
assert(myCounter == 0);
myCounter = 4;
assert(myCounter == 4);
  });

  assert(myCounter == 2);
});

assert(myCounter == 1);
  });

}

unittest {

  alias context!(Object) myCounterContext;
  alias myCounterContext.current myCounter;

  myCounterContext({
assert(myCounter is null);

myCounterContext({
  assert(myCounter is null);
  myCounter = new Object();
  assert(myCounter !is null);
});

assert(myCounter is null);
myCounter = new Object();

myCounterContext({
  assert(myCounter is null);
  myCounter = new Object();
  assert(myCounter !is null);
});

assert(myCounter !is null);
  });

}

unittest {

  struct counterA { int value; alias value this; }
  struct counterB { int value; alias value this; }

  alias context!(counterA) myCounterAContext;
  alias myCounterAContext.current myCounterA;

  alias context!(counterB) myCounterBContext;
  alias myCounterBContext.current myCounterB;

  myCounterAContext({
myCounterA = 1;

myCounterBContext({
  myCounterB = 2;
  assert(myCounterA == 1);
  myCounterA = 3;
});

assert(myCounterA == 3);
  });

}


Re: Using "cast(enum)" for explicit request of ctfe

2013-12-04 Thread luka8088
On 4.12.2013. 16:28, monarch_dodra wrote:
> On Wednesday, 4 December 2013 at 13:45:35 UTC, Jakob Ovrum wrote:
>> On Wednesday, 4 December 2013 at 13:16:35 UTC, monarch_dodra wrote:
>>> Problem is that doing this returns an immutable type, which isn't
>>> quite the same as a ctfe variable (which was the initial goal, as far
>>> as I'm concerned)
>>
>> Immutable "global" variables with initializers are readable at
>> compile-time (as the initializers are required to be readable at
>> compile-time). I don't know what you mean by "CTFE variable".
> 
> I mean that:
> 
> auto a = eval!(1 + 2);
> a += 1; // <= cannot modify immutable expression 3
> 
> Should work.

I don't think so. With type inference a is immutable.

int a = eval!(1 + 2);

Works.

> 
> Or that:
> 
> auto arr = eval!(iota(0, 10).array());
> //arr is expected to be int[]
> //NOT immutable(int[])
> arr[0] = 10; //Error: cannot modify immutable expression arr[0]
> 
> Long story short, just because something is pre-calculated with CTFE
> doesn't mean it can't be mutable.
> 
> For example:
> enum a10 = iota(0, 10).array();
> auto arr1 = a10;
> auto arr2 = a10;
> assert(arr1 !is arr2);
> assert(arr1 == arr2);
> arr1[0] = 10;
> assert(arr1 != arr2);
> 
> 
>> Using enum has issues, as elaborated upon by Don in this enhancement
>> request[1] (for reference; I'm sure you remember them).
>>
>> [1] http://d.puremagic.com/issues/show_bug.cgi?id=10950
> 
> I absolutely remember that. But I still believe that is a *bug*, and not
> an enhancement request. Buggy behavior should not dictate our design.



Re: Using "cast(enum)" for explicit request of ctfe

2013-12-04 Thread luka8088
On 4.12.2013. 13:08, Jakob Ovrum wrote:
> On Wednesday, 4 December 2013 at 11:54:08 UTC, luka8088 wrote:
>> Eval comes from the examples at http://dlang.org/function.html#interpretation
> 
> A couple of notes:
> 
> The use of a variadic template parameter instead of an alias parameter
> is misleading because the template does not need to support types in the
> first place (and the proposed implementation would fail when given a type).

Yeah. I guess it is a documentation (example) issue.

> 
> The name used on dlang.org is correctly using a lowercase `e`, according
> to the naming convention for templates that always evaluate to
> values/variables as opposed to types.

Oh, I didn't know that. Thanks!



Re: Using "cast(enum)" for explicit request of ctfe

2013-12-04 Thread luka8088
On 4.12.2013. 12:41, monarch_dodra wrote:
> On Wednesday, 4 December 2013 at 11:32:40 UTC, Jakob Ovrum wrote:
>> On Wednesday, 4 December 2013 at 11:12:54 UTC, monarch_dodra wrote:
>>> Or, if somebody has an idea of how to do this via a library solution?
>>
>> alias eval(alias exp) = exp;
> 
> Nice :D
> 
> Very very nice. Though that should be "enum" I think.
> 
> I think having this somewhere in Phobos would be a great addition.
> 
> Not sure "eval" would be correct though, as "eval" tends to imply parsing.
> 
> I'll just file a simple ER then. Where would we put such an addition?

Eval comes from the examples at http://dlang.org/function.html#interpretation



Re: Using "cast(enum)" for explicit request of ctfe

2013-12-04 Thread luka8088
On 4.12.2013. 12:12, monarch_dodra wrote:
> I love D's ctfe capabilities. They allow using complex values, with no
> run-time cost, and at a very low "code" cost.
> 
> One thing that does kind of get on my nerves is how you *always* have to
> declare an actual enum to do that. You can't do CTFE on the fly.
> 
> This ranges from mildly annoying, typically:
> 
> //Declare a CTFE message
> enum message = format("Some message: %s", some_static_args);
> //Use the CTFE message
> enforce(pointer, message);
> 
> Or, also,
> enum fib5 = fib(5); //CTFE calculate
> writeln(fib5); //use
> 
> To, sometimes, downright *impossible*. If you ever need to do CTFE
> inside the body of a static foreach, dmd will block you due to
> "redeclaration":
> 
> foreach(T; Types)
> {
> enum message = format("This type is %s.", T.stringof); //Error!
> Redeclaration
> writeln(message);
> }
> 
> Fixing this one requies an external template that will create your enum
> on the fly.
> 
> 
> 
> I'm thinking: While this is all surmountable, I'm pretty sure the
> language could give us a easier time of this. We have the possibility to
> declare and call a lambda both in one line. Why not be able to declare a
> ctfe value as a 1-liner too?
> 
> I'm thinking, a simple cast: A cast to the "enum" type, which explicitly
> means "this value needs to be compile time known". Usage would look like:
> 
> enforce(pointer, cast(enum)format("Some message: %s", some_static_args));
> writeln(cast(enum)fib(5));
> foreach(T; Types)
> writeln(cast(enum)format("This type is %s.", T.stringof));
> 
> Here, we have some very simple code, no redundant variables, and no
> run-time overhead.
> 
> I'm just throwing this out there to get some quick feedback before
> filling an ER, or maybe a DIP.
> 
> Or, if somebody has an idea of how to do this via a library solution?
> 
> Thoughts?

You can use Eval template:

template Eval (alias value) {
  alias value Eval;
}

writeln(Eval!(1 + 1));

enforce(pointer, Eval!(format("Some message: %s", some_static_args)));

Does this answer your question?
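
For completeness, here is a self-contained version of that example (a
minimal sketch; the imports, main and the concrete values are my
addition):

import std.exception : enforce;
import std.string : format;

template Eval (alias value) {
  alias value Eval; // the argument is evaluated at compile time
}

void main () {
  // usable anywhere a compile-time constant is required
  static assert(Eval!(1 + 1) == 2);

  auto pointer = new int;
  // the message string is built once, during compilation
  enforce(pointer !is null, Eval!(format("Some message: %s", 42)));
}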



Re: mutexes (mutices?) and TLS

2013-12-03 Thread luka8088
On 2.12.2013. 17:44, Torje Digernes wrote:
> On Monday, 2 December 2013 at 12:09:34 UTC, Artem Tarasov wrote:
>> Yes, global variables are thread-local by default. Use shared or _gshared
>> qualifier.
>> I guess such questions belong to D.learn.
> 
> Is this really desired behaviour for mutexes? Since mutexes (per my
> rather little experience) is mostly used for locking between threads,
> which is not doable without extra qualifiers now.
> 
> I know that global variables are thread local, but using the mutex in
> different threads, which seems to be their main usage, require extra
> qualifiers. Shouldn't the main usage be possible using default setup, as
> in no extra qualifiers?

Take into consideration that shared (
http://dlang.org/migrate-to-shared.html ) has not been fully implemented
nor fully documented yet.

For more information check out:
http://forum.dlang.org/thread/k7orpj$1tt5$1...@digitalmars.com



Re: Duplicate keys in array literals?

2013-11-28 Thread luka8088
On 28.11.2013. 21:01, Jacob Carlborg wrote:
> On 2013-11-28 18:55, luka8088 wrote:
> 
>> PHP also allows it:
>>
>> $data = array('a' => 1, 'a' => 2);
>>
>> And I find it to be only a source of bugs.
> 
> Arrays are a weird beast in PHP. They're both arrays and associative
> arrays, at the same time, somehow.
> 

They are associative arrays which can emulate array-like behavior. But
unfortunately that has also, in my experience, turned out to be just
another source of bugs.



Re: Duplicate keys in array literals?

2013-11-28 Thread luka8088
On 28.11.2013. 12:23, bearophile wrote:
> Is it a good idea to silently statically accept duplicated keys in both
> dynamic array literals and in associative array literals?
> 
> 
> void main() {
> int[] a = [0:10, 0:20];
> int[int] aa = [0:10, 0:20];
> }
> 
> 
> I don't remember having ever had the need for this, and on the other
> hand I have had some mistakes like this in my D code not caught
> statically by the compiler.
> 
> ---
> 
> Note that in this post I am not discussing about inserting multiple
> times a key-pair in an associative array, this is normal and useful:
> 
> void main() {
> int[int] aa;
> aa[0] = 10;
> aa[0] = 20;
> }
> 
> Bye,
> bearophile

PHP also allows it:

$data = array('a' => 1, 'a' => 2);

And I find it to be only a source of bugs.



Re: DIP 50 - AST macros

2013-11-23 Thread luka8088
On 23.11.2013. 11:40, Jacob Carlborg wrote:
> On Friday, 22 November 2013 at 23:43:26 UTC, luka8088 wrote:
> 
> Then q{ } or <[ ]> would be very limited. Not even templates need to
> contain semantically valid code, as long as it's not instantiated.
> Usually the idea with AST macros is to take code that is not valid
> (semantically) and create a meaning for it (make it valid). If it's
> already valid in the first place why use AST macros at all? Just use the
> code as is.
> 
> Hmm, it could be useful to only allow semantically valid code and use
> macros to add new things to the code. But again, very limiting.
> 

I don't think so. What I was proposing is to split the problem into
sub-problems and then find the best method for each one. Templates are
one of them, and they solve the problem of not-so-pretty string
concatenation, which is why they should contain only valid D code.

I don't think that writing invalid code and then making it valid in
the compilation process should be allowed; only transforming one valid
code into another valid code, but hygienically. I don't have a concrete
example of why that is a bad idea, I can only speak from experience. So
this is only my opinion, not an argument.

>> Currently mixin() only accepts a string that contains a valid D code.
> 
> The AST returned from a macro need to be valid D code as well.
> 
>> So if you have a custom parser for your DSL you need to generate a D code
>> from that DSL and then pass it to the compiler using mixin() in order
>> for the compiler to parse it again. Double parsing could be skipped if
>> mixin() would accept already built D AST and on the other hand parsing
>> DSL and building D AST yourself would allow maximum flexibility.
> 
> You can already do that today with string mixins, although you would
> need to convert the AST back to a string first.
> 

Yeah. I was referring to
http://forum.dlang.org/post/l5vcct$2lit$1...@digitalmars.com

So, for example, the problem from the referred post could be addressed by
defining an AST struct type that is accessible to the user and making
mixin() accept both a string and an AST struct. That is also why I wrote
http://forum.dlang.org/post/l6n91e$29hd$1...@digitalmars.com


It seems to me now that we don't understand each other so well :) Maybe
we should put in more examples?


Re: DIP 50 - AST macros

2013-11-22 Thread luka8088
On 22.11.2013. 11:19, Jacob Carlborg wrote:
> On 2013-11-22 10:44, luka8088 wrote:
> 
>> What we currently have:
> 
> It should have been t{ }. I do understand the difference between t{ } and
> q{ }. But not between <[ ]> and t{ }. Is it just two different syntaxes
> for the same?
> 

Yes. t{ } and <[ ]> are the same.


Re: DIP 50 - AST macros

2013-11-22 Thread luka8088
On 22.11.2013. 11:17, Jacob Carlborg wrote:
> On 2013-11-22 10:27, luka8088 wrote:
> 
>> Um, my it's suppose to be the same as <[ ... ]> but I liked t{ ... }
>> syntax better as it looked more consistent with what D already has. But
>> I should have used <[ ... ]> , my mistake sorry.
> 
> I thought you argued that the t{ } need to contain semantically valid code?
> 

Yes. I still do. And I think that <[ ... ]> should contain semantically
valid code. In my opinion, if you wish to be able to write anything
else, the way to go would be: write it using q{ ... }, parse it
yourself, build a valid D AST yourself, and give the D AST to the
compiler using mixin().

Currently mixin() only accepts a string that contains valid D code. So,
if you have a custom parser for your DSL, you need to generate D code
from that DSL and then pass it to the compiler using mixin(), in order
for the compiler to parse it again. Double parsing could be skipped if
mixin() would accept an already built D AST, and on the other hand,
parsing the DSL and building the D AST yourself would allow maximum
flexibility.
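
To illustrate, here is a minimal compilable sketch of that double-parsing
workflow; the "parser" here is a deliberately trivial stand-in of my own:

import std.array : replace;
import std.stdio : writeln;

// stand-in for a real DSL parser: rewrites the DSL keyword "emit" into a
// writeln call and returns the generated D code as a string
string dslToD (string dsl) {
  return dsl.replace("emit", "writeln");
}

void main () {
  // q{ ... } is lexed as a token string; dslToD runs via CTFE, and the
  // compiler then parses the generated D code a second time
  mixin(dslToD(q{ emit("hello from the DSL"); }));
}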


Re: DIP 50 - AST macros

2013-11-22 Thread luka8088
On 21.11.2013. 8:28, Jacob Carlborg wrote:
> On 2013-11-20 23:25, luka8088 wrote:
>>
>> The point of this example is that mixin() accepts Ast instead of string.
>> That way, we parse our DSL to D Ast and give it straight to the compiler
>> and everything is done only once!
> 
> So how is this different except for my proposal, except the use of q{}
> and "mixin"?
> 

What we currently have:


custom syntax ---> custom parser ---> custom code generator ---> D
syntax (as string)


D syntax (as string) ---> mixin ---
   \
D syntax (read by compiler) ---> D AST (held by the compiler,
unreachable) ---> compilation to binary (unreachable)


The way I see it we should allow to following:


custom syntax ---> custom parser -
  \
D syntax inside <[ ... ]> template ---> D AST (as struct)


D AST (as struct) ---> mixin ---
\
D syntax (read by compiler) > D AST (held by compiler)


D AST (held by compiler, reachable) ---> reflection ---> D AST (as struct)

D AST (held by compiler) ---> compilation to binary



Re: DIP 50 - AST macros

2013-11-22 Thread luka8088
On 21.11.2013. 9:31, Jacob Carlborg wrote:
> On 2013-11-21 09:01, luka8088 wrote:
> 
>> When using q{} compiler treats the contents as a regular string, and you
>> have to parse it and give it to the compiler using "mixin". So basically
>> you can say to the compiler: this part of code is my DSL and I will
>> parse it and check the semantics instead of you, then when I am done I
>> will give you D AST represantation of it.
>>
>> The interpretation of the content of q{} is up to you, where in other
>> proposals compiler builds AST based on, I would say, guessing.
>>
>> Take a look at this last example. And lets say that it is from python.
>> In python dont2() belong inside else block. So how else would you make
>> sure that compiler behaves accordingly?
> 
> Sorry, I meant your original suggestion of using t{}.
> 

Um, mine is supposed to be the same as <[ ... ]>, but I liked the t{ ... }
syntax better as it looked more consistent with what D already has. But
I should have used <[ ... ]>; my mistake, sorry.



Re: DIP 50 - AST macros

2013-11-21 Thread luka8088
On 21.11.2013. 8:28, Jacob Carlborg wrote:
> On 2013-11-20 23:25, luka8088 wrote:
> 
>> If I understood you correctly, the issue with current way DSLs are
>> implemented is that code needs to be parsed two times. First time DSL
>> author parses it and creates D code from it, and second time D compiler
>> parses that D code and compiles it. What I would suggest in this case is
>> that instead of intercepting the compiler and "fixing" the semantics
>> before it is verified we allow the user to build D Ast and give it to
>> the compiler. That is why I used the mixin in my example.
>>
>> Ast dCodeAst = dslToD!(q{
>>if true
>>  do()
>>else
>>  dont()
>>  dont2()
>>whatever()
>> });
>>
>> // manipulate dCodeAst
>>
>> mixin(dCodeAst);
>>
>> The point of this example is that mixin() accepts Ast instead of string.
>> That way, we parse our DSL to D Ast and give it straight to the compiler
>> and everything is done only once!
> 
> So how is this different except for my proposal, except the use of q{}
> and "mixin"?
> 

When using q{}, the compiler treats the contents as a regular string, and
you have to parse it and give it to the compiler using "mixin". So
basically you can say to the compiler: this part of the code is my DSL
and I will parse it and check the semantics instead of you; then, when I
am done, I will give you the D AST representation of it.

The interpretation of the content of q{} is up to you, whereas in other
proposals the compiler builds the AST based on, I would say, guessing.

Take a look at this last example, and let's say that it is from Python.
In Python, dont2() belongs inside the else block. So how else would you
make sure that the compiler behaves accordingly?


Re: @property (again)

2013-11-20 Thread luka8088
On 21.11.2013. 4:14, Manu wrote:
> It would be nice to have a commitment on @property.
> Currently, () is optional on all functions, and @property means nothing.
> I personally think () should not be optional, and @property should
> require that () is not present (ie, @property has meaning).
> 
> This is annoying:
>   alias F = function();
> 
>   @property F myProperty() { return f; }
> 
>   Then we have this confusing situation:
> myProperty(); // am I calling the property, or am I calling the
> function the property returns?
> 
> This comes up all the time, and it really grates my nerves.
> Suggest; remove @property, or make it do what it's supposed to do.

Fix it!

struct S {
  auto f1 () { return 1; }
  auto f2 () { return { return 2; }; }

  @property auto p1 () { return 5; }
  @property auto p2 () { return { return 6; }; }
}

unittest {

  S s1;

  // don't break current function call behavior
  // don't deal with optional () now
  // (but also don't break their behavior)
  assert(s1.f1 == 1);
  assert(s1.f1() == 1);

  assert(s1.f2()() == 2);
  auto fv = s1.f2;
  assert(fv() == 2);

  // make sure @property works as described
  // in http://dlang.org/property.html#classproperties
  assert(s1.p1 == 5);
  static assert(!__traits(compiles, s1.p1() == 5));

  assert(s1.p2() == 6);
  static assert(!__traits(compiles, s1.p2()() == 6));

  auto pv1 = s1.p1;
  assert(pv1 == 5);

  auto pv2 = s1.p2();
  assert(pv2 == 6);

}

This test is according to the documentation and as far as I remember
everyone agrees that this is how @property should behave.


Re: @property (again)

2013-11-20 Thread luka8088
On 21.11.2013. 7:06, Walter Bright wrote:
> On 11/20/2013 7:14 PM, Manu wrote:
>> It would be nice to have a commitment on @property.
>> Currently, () is optional on all functions, and @property means nothing.
>> I personally think () should not be optional, and @property should
>> require that
>> () is not present (ie, @property has meaning).
> 
> The next release is going to be about bug fixes, not introducing
> regressions from new features(!). It's a short release cycle, anyway.
> 

How is this not a bug?

It sure does not behave the same as described in
http://dlang.org/property.html#classproperties .

And what everyone wants (and agrees on) is that it should behave like it
is described in the documentation!

How does that not qualify as a bug!?


Re: @property (again)

2013-11-20 Thread luka8088
On 21.11.2013. 6:59, Jesse Phillips wrote:
> On Thursday, 21 November 2013 at 03:37:19 UTC, Manu wrote:
>> On 21 November 2013 13:27, Adam D. Ruppe 
>> wrote:
>>
>>> On Thursday, 21 November 2013 at 03:14:30 UTC, Manu wrote:
>>>
 I personally think () should not be optional

>>>
>>> No.
>>
> 
> I'm going to reiterate Adam's final statement. We need to fix the
> second case you claim, shut up about the optional parens so it
> does get fixed an we stop preventing the one thing from getting
> fixed because of the other.

Amen to that.



Re: DIP 50 - AST macros

2013-11-20 Thread luka8088
On 20.11.2013. 9:04, Jacob Carlborg wrote:
> On 2013-11-19 21:54, luka8088 wrote:
> 
>> Well, do think about that :)
>>
>> auto f = e => e.name == "John";
>> if (true)
>>f = e => e.name == "Jack";
>>
>> auto person = Person.where(f);
>>
>> I can think of a many use cases where conditional query generation is
>> required. But I don't see how this could be done using AST macros.
> 
> Using the Rails plugin, which I've got this idea from, I would not make
> the lambda conditional but the whole statement, translated to D:
> 
> auto person = Person.scoped();
> 
> if (true)
> person = person.where(e => e.name == "Jack");
> 
> else
> person = person.where(e => e.name == "John");

Ok, I see. Yes, I tried to think of a case where this could not work but
I was unable to find one.

> 
>> Um, sorry. I don't understand the question.
>>
>> This example (and suggestion) was suppose to show that we could allow
>> AST mixins as well as string mixins. It should behave like string mixins
>> but the main difference is that AST is structured so it much cleaner to
>> manipulate.
> 
> What I was trying to say is, what don't you like about my suggestion. Is
> it that the "mixin" keyword isn't used.
> 

If I understood you correctly, the issue with the current way DSLs are
implemented is that code needs to be parsed two times: the first time the
DSL author parses it and creates D code from it, and the second time the
D compiler parses that D code and compiles it. What I would suggest in
this case is that, instead of intercepting the compiler and "fixing" the
semantics before they are verified, we allow the user to build a D AST
and give it to the compiler. That is why I used the mixin in my example.

Ast dCodeAst = dslToD!(q{
  if true
do()
  else
dont()
dont2()
  whatever()
});

// manipulate dCodeAst

mixin(dCodeAst);

The point of this example is that mixin() accepts an Ast instead of a
string. That way, we parse our DSL to a D AST, give it straight to the
compiler, and everything is done only once!


Re: DIP 50 - AST macros

2013-11-19 Thread luka8088
On 19.11.2013. 21:32, Jacob Carlborg wrote:
> On 2013-11-19 19:32, luka8088 wrote:
> 
>> Oh, I see. It seems that I indeed missed the point.
>>
>> It seems to me that this DIP could to be granulated into: AST
>> reflection, AST manipulation and AST template. The reason for this, as
>> far as I see, is that although they overlap in some cases, in other they
>> could be used independently and it could help with understanding.
>>
>> Regarding AST reflection and AST manipulation, I have been thinking and
>> found a few situations that bugs me:
>>
>> In the example of
>>
>> auto person = Person.where(e => e.name == "John");
>>
>> what would be the response of:
>>
>> auto f = e => e.name == "John";
>> auto person = Person.where(f);
>>
>> I guess it should be a compiler error because f could be modified at
>> runtime and f's body could be hidden.
> 
> I haven't thought about that. But AST macros work at compile time

Well, do think about that :)

auto f = e => e.name == "John";
if (true)
  f = e => e.name == "Jack";

auto person = Person.where(f);

I can think of many use cases where conditional query generation is
required. But I don't see how this could be done using AST macros.

> 
>> So basically AST macros are
>> something that look like D but actually are not D. This seems to me like
>> an example that look good as a toy example but fails on the larger scale
>> so I agree with Water in this matter (unless better
>> examples/explanations are provided). Or maybe I am not seeing it clear
>> enough.
>>
>> But! Regarding AST templates, I think something like the following would
>> be a great syntax sugar:
>>
>> int i = 5;
>>
>> Ast a = t{ // t as template
>>int j = $i;
>>int k = $i + $i;
>> };
>>
>> // ... alter ast here if necessary
>>
>> mixin(a);
> 
> So you don't like that it's not a "mixin" there with AST macros?

Um, sorry. I don't understand the question.

This example (and suggestion) was supposed to show that we could allow
AST mixins as well as string mixins. They should behave like string
mixins, but the main difference is that an AST is structured, so it is
much cleaner to manipulate.

> 
>> Where syntax (and semantic) in template must be strictly D. The only
>> difference would be $ sign which would allow reference to symbols
>> outside the template.
> 
> If the semantics need to be valid this is very limited. Even more
> limiting than what we have now with string mixins. Note that with AST
> macros, the input needs to syntactically valid. The resulting AST of the
> macro needs to both be syntactically and semantically valid D code.
> 

When I first started programming I was introduced to static typing. Then
I discovered dynamic typing and dynamic structures (in PHP mostly), and
for me at that time they were much better and easier to use. However,
over time I learned that sometimes limitations are good, and now I like
static typing much more :) My point here is that limits are sometimes
good, even though it may not seem that way.

I think that we should think of more complex (real-world) examples, and
then the real issues will reveal themselves.


Re: DIP 50 - AST macros

2013-11-19 Thread luka8088
On 18.11.2013. 6:05, Walter Bright wrote:
> On 11/17/2013 7:14 PM, deadalnix wrote:
> 
> Ok, then I'm not seeing what AST macros do that lazy parameters /
> template overloading / mixin templates do not?


Well, this is just a small piece of the puzzle but I would like to be
able to write (as syntax sugar):

query q{
  select data;
}

a compiler to rewrite it to:

mixin(query!(q{
  select data;
}));

if the following exists:

mixin statement query (string code) {
  // ...
}

Maybe I am the only one but I found that string mixins + code generation
are used this way even i Phobos and I find the first syntax much more
cleaner.

Or this is not in the general interest?


Re: DIP 50 - AST macros

2013-11-19 Thread luka8088
On 13.11.2013. 13:47, Jacob Carlborg wrote:
> On 2013-11-13 09:15, luka8088 wrote:
>> I think that string concatenation is enough (at least for now), and if
>> you want another syntax for templates you can write a macro for that.
> 
> Strings are far from enough. Then you have missed the whole idea. It's
> not supposed to be syntax sugar for string mixins.
> 

Oh, I see. It seems that I indeed missed the point.

It seems to me that this DIP could be granulated into: AST
reflection, AST manipulation, and AST templates. The reason for this, as
far as I see, is that although they overlap in some cases, in others they
could be used independently, and it could help with understanding.

Regarding AST reflection and AST manipulation, I have been thinking and
found a few situations that bug me:

In the example of

auto person = Person.where(e => e.name == "John");

what would be the response of:

auto f = e => e.name == "John";
auto person = Person.where(f);

I guess it should be a compiler error, because f could be modified at
runtime and f's body could be hidden. So basically AST macros are
something that looks like D but actually is not D. This seems to me like
an example that looks good as a toy example but fails on a larger scale,
so I agree with Walter in this matter (unless better
examples/explanations are provided). Or maybe I am not seeing it clearly
enough.

But! Regarding AST templates, I think something like the following would
be a great syntax sugar:

int i = 5;

Ast a = t{ // t as template
  int j = $i;
  int k = $i + $i;
};

// ... alter ast here if necessary

mixin(a);

Where the syntax (and semantics) in the template must be strictly D. The
only difference would be the $ sign, which would allow references to
symbols outside the template.

So if you wanted to have a conditional template you could write:

int i = 5;
int j;
bool b = false;

mixin(t{
  $j = $i;
  static if ($b) {
$j++;
  }
});


Re: Build Master: Scheduling

2013-11-15 Thread luka8088
On 15.11.2013. 11:01, Jacob Carlborg wrote:
> On 2013-11-15 10:16, luka8088 wrote:
> 
>> Yes. For example, if you have version 0.1, 0.2 and 0.3. And you find and
>> fix a bug in 0.3 but you still wish to support backport for 0.2 and 0.1
>> that you indeed need to make 3 releases. 0.1.1, 0.2.1 and 0.3.1.
> 
> There's a difference in still supporting old releases and working on
> five different releases at the same time that hasn't been released at all.

Yeah. I agree. Bad idea.

> 
>> But then again having LTS that others have mentioned seems better. So
>> that only each nth release has 2.x.1, 2.x.2, 2.x.3.
>>
>>  From my perspective, not separating releases with improvements + bug
>> fixes from releases with only bug fixes is an issue. Because every new
>> improvement implies risk of new bugs and some users just want to have
>> one version that is as stable as possible.
>>
>> What do you all think about http://semver.org/ ?
>> We use this king of versioning notation at work and it turns out to be
>> very good.
> 
> I like it but I'm not sure who it's applied to applications. It's clear
> to see how it works for libraries but not for applications. I mean, what
> is considered an API change for an application? Changing the command
> line flags?
> 

I think an API change could be analogous to a feature change (and the way
features are interfaced).

So the version consists of x.y.z:

z increments only on bug fixes.
y increments when new features are added, but they are backwards
compatible. Incrementing y resets z to 0.
x increments when backwards-incompatible changes are made. Incrementing x
resets y and z to 0.
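
For example (illustrative): 2.4.1 -> 2.4.2 would be a bug-fix-only
release, 2.4.2 -> 2.5.0 would add backwards-compatible features, and
2.5.0 -> 3.0.0 would signal a breaking change.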



Re: Build Master: Scheduling

2013-11-15 Thread luka8088
On 15.11.2013. 0:22, Xavier Bigand wrote:
> On 14/11/2013 09:39, luka8088 wrote:
>> On 14.11.2013. 5:29, Tyro[17] wrote:
>>> On 11/13/13, 11:06 PM, Brad Roberts wrote:
>>>> On 11/13/13 7:13 PM, Tyro[17] wrote:
>>>>> On 11/13/13, 9:46 PM, Brad Roberts wrote:
>>>>>> On 11/13/13 4:37 PM, Tyro[17] wrote:
>>>>>>> I'm of the opinion, however, that
>>>>>>> the cycle should be six months long. This particular schedule is not
>>>>>>> of my own crafting but I
>>>>>>> believe it to be sound and worthy of emulation:
>>>>>>
>>>>>> I think 6 months between releases is entirely too long.  I'd really
>>>>>> like
>>>>>> us to be back closer to the once every month or two rather than only
>>>>>> twice a year.  The pace of change is high and increasing (which is a
>>>>>> good thing).  Release early and often yields a smoother rate of
>>>>>> introducing those changes to the non-bleeding-edge part of the
>>>>>> community.  The larger the set of changes landing in a release the
>>>>>> more
>>>>>> likely it is to be a painful, breaking, experience.
>>>>>
>>>>> Surely for those of us that live on the edge, it is fun to be able to
>>>>> use the latest and greatest.
>>>>> Hence the reason for monthly release of betas. Within a month
>>>>> (sometimes shorter) of any new feature
>>>>> being implemented in the language, you'll be able to download the
>>>>> binaries for your favorite distro
>>>>> and begin testing it.
>>>>>
>>>>> The side effect is that there is more time to flesh out a particular
>>>>> implementation and get it
>>>>> corrected prior to it being an irreversible eyesore in the language.
>>>>> You also have a greater play in
>>>>> the filing bug reports as to aid in minimizing the number of bugs that
>>>>> make it into the final release.
>>>>>
>>>>> Unlike us adventurers however, companies require a more stable
>>>>> environment to work in. As such, the
>>>>> six month cycle provides a dependable time frame in which they can
>>>>> expect to see only bugfixes in to
>>>>> the current release in use.
>>>>>
>>>>> I think this release plan is a reasonable compromise for both parties.
>>>>
>>>> Companies that don't want frequent changes can just skip releases,
>>>> using
>>>> whatever update frequency meets their desires.  Companies do this
>>>> already all the time.  That only issue there is how long we continue to
>>>> backport fixes into past releases.  So far we've done very minimal
>>>> backporting.
>>>>
>>>>
>>>
>>> And what I am proposing is that we start backporting to stable releases
>>> and with subsequent bugfix releases.
>>>
>>> I'm also suggesting that for people interested in a more frequent
>>> release will have at least five, if not more, such releases (betas)
>>> prior to the official release. Live on the edge... use the beta. That's
>>> what we do now.
>>>
>>> At the moment there's nothing that make dmd.2.064.2 any more bug free
>>> than its previously released counterparts. Very few people participated
>>> in the review of the betas which were released arbitrarily (when the
>>> time seemed right). We simply call on of those betas dmd.2.064.2 and
>>> moved on. It still has a slew of bugs and more are discovered daily as
>>> people start to finally use the so called  "release".
>>>
>>> I'm saying we are go about it a little different. We get more people
>>> involved in the testing process by providing more frequent release of
>>> betas and getting much of the bugs identified fixed before designating a
>>> release. To me you get what your are after a faster turnaround on fixes
>>> (monthly). And the broader customer base gets a more stable product with
>>> less bugs.
>>>
>>
>> Just a wild thought...
>>
>> Maybe we can have monthly release and still keep it stable. Imagine this
>> kind of release schedule:
>>
>> Month #  11  12  1   2   3
>>
>>   2.064   2.065   2.066   2.067   

Re: Build Master: Scheduling

2013-11-15 Thread luka8088
On 14.11.2013. 10:55, Jacob Carlborg wrote:
> On 2013-11-14 09:39, luka8088 wrote:
> 
>> Just a wild thought...
>>
>> Maybe we can have monthly release and still keep it stable. Imagine this
>> kind of release schedule:
>>
>> Month #    11          12          1           2           3
>>
>>            2.064       2.065       2.066       2.067       2.068
>>            2.065rc2    2.066rc2    2.067rc2    2.068rc2    2.069rc2
>>            2.066rc1    2.067rc1    2.068rc1    2.069rc1    2.070rc1
>>            2.067b2     2.068b2     2.069b2     2.070b2     2.071b2
>>            2.068b1     2.069b1     2.070b1     2.071b1     2.072b1
>>            2.069alpha  2.070alpha  2.071alpha  2.072alpha  2.073alpha
>>
>> Where new features are only added to alpha release. And bug fixes are
>> added to all releases.
>>
>> This way new bug fixes and new features would be released every month
>> but there would be a 5 month delay between the time that feature A is
>> added to the alpha release and the time feature A is propagated to the
>> stable release. But during this period feature A would be propagated
>> through releases and there would be plenty of opportunity to test it and
>> clear it of bugs. I am not a fan of such delay but I don't see any other
>> way new features could be added without higher risk of bugs.
>>
>> Also vote up for daily snapshots.
> 
> Are you saying we should have six releases going on at the same time?
> 

Yes. For example, if you have versions 0.1, 0.2 and 0.3, and you find and
fix a bug in 0.3 but you still wish to support backports for 0.2 and 0.1,
then you indeed need to make 3 releases: 0.1.1, 0.2.1 and 0.3.1.

But then again, having the LTS that others have mentioned seems better,
so that only each nth release gets 2.x.1, 2.x.2, 2.x.3.

From my perspective, not separating releases with improvements + bug
fixes from releases with only bug fixes is an issue, because every new
improvement implies a risk of new bugs, and some users just want to have
one version that is as stable as possible.

What do you all think about http://semver.org/ ?
We use this kind of versioning notation at work and it turns out to be
very good.


Re: Build Master: Scheduling

2013-11-14 Thread luka8088
On 14.11.2013. 5:29, Tyro[17] wrote:
> On 11/13/13, 11:06 PM, Brad Roberts wrote:
>> On 11/13/13 7:13 PM, Tyro[17] wrote:
>>> On 11/13/13, 9:46 PM, Brad Roberts wrote:
 On 11/13/13 4:37 PM, Tyro[17] wrote:
> I'm of the opinion, however, that
> the cycle should be six months long. This particular schedule is not
> of my own crafting but I
> believe it to be sound and worthy of emulation:

 I think 6 months between releases is entirely too long.  I'd really
 like
 us to be back closer to the once every month or two rather than only
 twice a year.  The pace of change is high and increasing (which is a
 good thing).  Release early and often yields a smoother rate of
 introducing those changes to the non-bleeding-edge part of the
 community.  The larger the set of changes landing in a release the more
 likely it is to be a painful, breaking, experience.
>>>
>>> Surely for those of us that live on the edge, it is fun to be able to
>>> use the latest and greatest.
>>> Hence the reason for monthly release of betas. Within a month
>>> (sometimes shorter) of any new feature
>>> being implemented in the language, you'll be able to download the
>>> binaries for your favorite distro
>>> and begin testing it.
>>>
>>> The side effect is that there is more time to flesh out a particular
>>> implementation and get it
>>> corrected prior to it being an irreversible eyesore in the language.
>>> You also have a greater play in
>>> the filing bug reports as to aid in minimizing the number of bugs that
>>> make it into the final release.
>>>
>>> Unlike us adventurers however, companies require a more stable
>>> environment to work in. As such, the
>>> six month cycle provides a dependable time frame in which they can
>>> expect to see only bugfixes in to
>>> the current release in use.
>>>
>>> I think this release plan is a reasonable compromise for both parties.
>>
>> Companies that don't want frequent changes can just skip releases, using
>> whatever update frequency meets their desires.  Companies do this
>> already all the time.  That only issue there is how long we continue to
>> backport fixes into past releases.  So far we've done very minimal
>> backporting.
>>
>>
> 
> And what I am proposing is that we start backporting to stable releases
> and with subsequent bugfix releases.
> 
> I'm also suggesting that for people interested in a more frequent
> release will have at least five, if not more, such releases (betas)
> prior to the official release. Live on the edge... use the beta. That's
> what we do now.
> 
> At the moment there's nothing that make dmd.2.064.2 any more bug free
> than its previously released counterparts. Very few people participated
> in the review of the betas which were released arbitrarily (when the
> time seemed right). We simply call on of those betas dmd.2.064.2 and
> moved on. It still has a slew of bugs and more are discovered daily as
> people start to finally use the so called  "release".
> 
> I'm saying we are go about it a little different. We get more people
> involved in the testing process by providing more frequent release of
> betas and getting much of the bugs identified fixed before designating a
> release. To me you get what your are after a faster turnaround on fixes
> (monthly). And the broader customer base gets a more stable product with
> less bugs.
> 

Just a wild thought...

Maybe we can have monthly releases and still keep them stable. Imagine
this kind of release schedule:

Month #    11          12          1           2           3

           2.064       2.065       2.066       2.067       2.068
           2.065rc2    2.066rc2    2.067rc2    2.068rc2    2.069rc2
           2.066rc1    2.067rc1    2.068rc1    2.069rc1    2.070rc1
           2.067b2     2.068b2     2.069b2     2.070b2     2.071b2
           2.068b1     2.069b1     2.070b1     2.071b1     2.072b1
           2.069alpha  2.070alpha  2.071alpha  2.072alpha  2.073alpha

Where new features are only added to the alpha release, and bug fixes
are added to all releases.

This way new bug fixes and new features would be released every month,
but there would be a 5-month delay between the time feature A is added
to the alpha release and the time feature A is propagated to the stable
release. During this period feature A would be propagated through the
releases, and there would be plenty of opportunity to test it and clear
it of bugs. I am not a fan of such a delay, but I don't see any other
way new features could be added without a higher risk of bugs.

Also vote up for daily snapshots.


Re: DIP 50 - AST macros

2013-11-13 Thread luka8088
On 13.11.2013. 13:26, Jacob Carlborg wrote:
> On 2013-11-13 09:34, luka8088 wrote:
> 
>> What about something like this?
>>
>> class Person {
>>
>>macro where (Context context, Statement statement) {
>>  // ...
>>}
>>
>> }
>>
>> auto foo = "John";
>> auto result = Person.where(e => e.name == foo);
>>
>> // is replaced by
>> auto foo = "John";
>> auto result = Person.query("select * from person where person.name = " ~
>> sqlQuote(foo) ~ ";");
> 
> That's basically what would happen.
> 

May/Should I add such an example to the wiki?

--
Luka


Re: DIP 50 - AST macros

2013-11-13 Thread luka8088
On 13.11.2013. 9:26, Jacob Carlborg wrote:
> On 2013-11-12 17:14, John Colvin wrote:
> 
>> oh, I see. Would AST macros really be enough to make this work in D?
>> "Arbitrary code" is a huge feature space in D, including much that
>> doesn't map well to anything outside of a relatively low-level language,
>> let alone SQL.
>> I can see it quickly becoming a nightmare that would be worse than just
>> issuing the predicate as an sql string or some generic equivalent.
> 
> Person.where(e => e.name == "John")
> 
> I'm thinking that we only need to convert the part that is prefixed
> with, in this example, "e". Any other code should be executed in the
> context of the caller. It should be possible to do this as well:
> 
> auto foo = "John";
> auto result = Person.where(e => e.name == foo);
> 
> Which will result in the same SQL query.
> 
> I'm using a pluign to Ruby on Rails that does something similar but by
> overloading operators. The problem with this approach, in Ruby, is that
> you cannot overload operators like || and &&, so instead they overload |
> and & resulting in new problems like operator precedence. Example:
> 
> Person.where{ |e| (e.name == "John") & (e.address == "Main street") }
> 

What about something like this?

class Person {

  macro where (Context context, Statement statement) {
// ...
  }

}

auto foo = "John";
auto result = Person.where(e => e.name == foo);

// is replaced by
auto foo = "John";
auto result = Person.query("select * from person where person.name = " ~
sqlQuote(foo) ~ ";");


Re: DIP 50 - AST macros

2013-11-13 Thread luka8088
On 10.11.2013. 22:20, Jacob Carlborg wrote:
> I've been thinking quite long of how AST macros could look like in D.
> I've been posting my vision of AST macros here in the newsgroup a couple
> of times already. I've now been asked to create a DIP out of it, so here
> it is:
> 
> http://wiki.dlang.org/DIP50
> 

I took a look at it and here is my conclusion for now:

The statement and attribute macro examples look great. But I don't like
the Linq example. I don't think code like the following should be allowed.

query {
  from element in array
  where element > 2
  add element to data
}


From my point of view this whole idea is great, as it makes easier what
is already possible. For example, with the current behavior, if I wanted
to write:

foo {
  writeln("foo");
  writeln("foo again");
}

I would have to write:

mixin(foo!(q{
  writeln("foo");
  writeln("foo again");
}));

So the proposed behavior looks much nicer, and I agree with it, as the
content of the foo block is actually written in D, and I think whoever
is reading it would be comfortable with it.


However, for other, non-D syntaxes I would prefer something like:

query q{
  from element in array
  where element > 2
  add element to data
}

Which can be handled by:

macro query (Context context, string dsl) {
return domainSpecificLanguageToD(dsl);
}

This in turn is already possible by writing the following; it only
allows it to be written in a more readable way. And the q{ ... } notation
clearly points out that there is something special going on. Also, by
passing such content as a string, the user can implement a custom (or
call one of the predefined) tokenizer/lexer/parser.

mixin(query!(q{
  from element in array
  where element > 2
  add element to data
}));


I also don't like the <[ ... ]> syntax because:
1. Like others have said, it looks very foreign.
2. I don't think there is a need to add a new syntax.

I think that string concatenation is enough (at least for now), and if
you want another syntax for templates you can write a macro for that.

For example:

macro myAssert (Context context, Ast!(bool) val, Ast!(string) str = null) {
  auto message = str ? "Assertion failure: " ~ str.eval : val.toString();
  auto msgExpr = literal(constant(message));

  return "
if (!" ~ val ~ ")
  throw new AssertError(" ~ msgExpr ~ ");
  ";

  // or
  return astTemplate q{
if (!$val)
  throw new AssertError($msgExpr);
  };
}

void main () {
myAssert(1 + 2 == 4);
}


What do you guys think?

--
Luka


Re: DIP 50 - AST macros

2013-11-11 Thread luka8088
On 10.11.2013. 22:20, Jacob Carlborg wrote:
> I've been thinking quite long of how AST macros could look like in D.
> I've been posting my vision of AST macros here in the newsgroup a couple
> of times already. I've now been asked to create a DIP out of it, so here
> it is:
> 
> http://wiki.dlang.org/DIP50
> 

Thumbs up!


Re: Aspect Oriented Programming in D

2013-10-30 Thread luka8088
On 30.10.2013. 5:05, Sumit Adhikari wrote:
> 
> Dear All,
> 
> I want to exploit D Language for "Aspect Oriented Programming".
> 
> I would like to have a discussion with people who have similar interest
> in D AOP and who possibly have implemented AOP in D.
> 
> Would be great to have a productive discussion.
> 
> Regards, Sumit

Just for reference:
http://forum.dlang.org/thread/huyqfcoosgzfneswn...@forum.dlang.org


Re: primitive value overflow

2013-05-29 Thread luka8088

On 24.5.2013. 1:58, bearophile wrote:

Peter Alexander:


What about code that relies on overflow? It's well-defined behaviour,
so it should be expected that people rely on it (I certainly do
sometimes)


Do you rely on signed or unsigned overflow?

My opinions on this topic have changed a few times.

A modern system language should offer the programmer both integral types
for the rare situations where the overflow or wrap around are expected
or acceptable, and other "default" integral types to be used in all the
other situations, where overflow or wrap-around are unexpected and not
desired. The implementation then should offer ways to optionally perform
run-time tests on the second group of integrals.

A very good system language should also offer various means to
statically verify the bounds of a certain percentage of values and
expression results, to reduce the amount of run-time tests needed (here
things like "Liquid Types" help).

D currently doesn't have such safe built-in types, and it doesn't offer
means to create such efficient types in library code. I think such means
should be provided:
http://d.puremagic.com/issues/show_bug.cgi?id=9850

Bye,
bearophile


I agree completely!

Would it maybe be a good consensus to allow operator overloading and 
invariants for primitive types?


Or is there a reason why that is a bad idea?


Re: primitive value overflow

2013-05-23 Thread luka8088

On 17.5.2013. 0:23, Marco Leise wrote:

Am Thu, 16 May 2013 22:39:16 +0200
schrieb luka8088:


On 16.5.2013. 22:29, Andrej Mitrovic wrote:

On Thursday, 16 May 2013 at 20:24:31 UTC, luka8088 wrote:

Hello everyone.

Today I ran into a interesting issue. I wrote

auto offset = text1.length - text2.length;


Yeah, I don't like these bugs either. In the meantime you can swap auto
with 'sizediff_t' or 'ptrdiff_t', and then you can check if it's
non-negative.


Yes, thanks for the advice, I did something similar. =)


Now that doesn't work if you deal with some text2 that is over
2 GiB longer than text1.
My approach is to see the close relation between any offset
from beginning or length to the machine memory model. So any
byte or char array in memory naturally has an unsigned length
typed by the architecture's word size (e.g. 32 or 64 bit).
With that in mind I _only_ ever subtract two values if I know
the difference will be positive. That is the case for
file_size - valid_offset for example.
I don't know the context for your line of code, but if text1
and text2 are passed in as parameters to a function, a
contract should verify that text1 is longer (or equal) than
text2.
Now feel free to tell me I'm wrong, but with the two lengths
being natural numbers or "countable", I claim that a negative
value for your offset variable would not have been usable
anyway. It is a result that makes no sense. So on the next line
you probably check "if (offset>= 0)" which is the same as
putting "if (text1.length>= text2.length)" one line earlier
to avoid running into the situation where you can end up with
an over- or underflow because the result range of size_t -
size_t fits neither size_t nor sizediff_t.

Say text1 is 0 bytes long and text2 is 3_000_000_000 bytes
long. Then -3_000_000_000 would be the result that cannot be
stored in any 32-bit type. And thus it is important to think
about possible input to your integer calculations and place
if-else-branches there (or in-contracts), especially when the
language accepts overflows silently.
But I'd really like to see the context of your code if it is
not a secret. :)



I understand perfectly the issue that you are pointing out, but that is 
not the real issue here. I know how computer arithmetic works; the 
understanding is not the issue. The real issue is that at the time 
of writing, unsigned was never mentioned, and it never came to my mind 
that it could be the problem (until I printed the actual value). So in 
reality I made a mistake, as I sometimes make a typo, and I fix 
them (as I did this one) by debugging.


What I want to point out is that this kind of mistake can be pointed 
out to the programmer in debug mode (saving programmer time) by adding a 
runtime check. The only real benefit here would be shorter debug time, 
and the only real tradeoff would be slower execution in debug mode. 
Nothing else.




Re: primitive value overflow

2013-05-16 Thread luka8088

On 16.5.2013. 22:35, Mr. Anonymous wrote:

On Thursday, 16 May 2013 at 20:29:13 UTC, Andrej Mitrovic wrote:

On Thursday, 16 May 2013 at 20:24:31 UTC, luka8088 wrote:

Hello everyone.

Today I ran into a interesting issue. I wrote

auto offset = text1.length - text2.length;


Yeah, I don't like these bugs either. In the meantime you can swap
auto with 'sizediff_t' or 'ptrdiff_t', and then you can check if it's
non-negative.


It's exactly the same as checking if(text1.length > text2.length).
But the idea of checking for integer overflows in debug builds is really
nice.

P.S. I remember Microsoft had some serious bug because of an integer
overflow, that allowed a remote machine to create a denial of service.



I agree that it is exactly the same as checking if (text1.length > 
text2.length), and I don't think that this is an issue if you are aware 
of the fact that you are working with unsigned values. But in the code 
that I wrote there was no mention of unsigned, so the possibility of 
that kind of issue never came to mind until I actually printed the 
values. And that is what I wanted to emphasize.


Re: primitive value overflow

2013-05-16 Thread luka8088

On 16.5.2013. 22:29, Andrej Mitrovic wrote:

On Thursday, 16 May 2013 at 20:24:31 UTC, luka8088 wrote:

Hello everyone.

Today I ran into a interesting issue. I wrote

auto offset = text1.length - text2.length;


Yeah, I don't like these bugs either. In the meantime you can swap auto
with 'sizediff_t' or 'ptrdiff_t', and then you can check if it's
non-negative.


Yes, thanks for the advice, I did something similar. =)



primitive value overflow

2013-05-16 Thread luka8088

Hello everyone.

Today I ran into an interesting issue. I wrote

  auto offset = text1.length - text2.length;

and in case text2 was longer than text1 I got something around 4294967291.

So I opened an issue:
http://d.puremagic.com/issues/show_bug.cgi?id=10093

I know that there is a perfectly valid reason for this behavior, and 
that this behavior is not undefined, but it is unexpected, especially 
because unsigned is never mentioned in the code. One solution that comes 
to mind is changing length to signed, but that makes no sense because 
length is never negative.


After some thinking, a thought came: maybe such value overflow should 
be treated the same way as an array overflow and checked by druntime, 
with optional disabling in production code (like array bounds checks)?


I think it would be very helpful to get an error for such a mistake (one 
that could very easily happen by accident), while on the other hand it 
could be disabled (like all the other checks).
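
As an illustration, the kind of check I have in mind could look something
like the following (checkedSub is a hypothetical helper of mine, not an
existing druntime function):

size_t checkedSub (size_t a, size_t b) {
  // fails fast in debug builds and compiles away with -release,
  // just like array bounds checks in system code
  assert(a >= b, "unsigned subtraction would overflow");
  return a - b;
}

unittest {
  string text1 = "ab";
  string text2 = "abcde";
  assert(checkedSub(text2.length, text1.length) == 3); // fine
  // checkedSub(text1.length, text2.length); // would fail in debug mode
}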


Re: @property - take it behind the woodshed and shoot it?

2013-01-26 Thread luka8088

On 24.1.2013 9:34, Walter Bright wrote:

This has turned into a monster. We've taken 2 or 3 wrong turns somewhere.

Perhaps we should revert to a simple set of rules.

1. Empty parens are optional. If there is an ambiguity with the return
value taking (), the () go on the return value.

2. the:
f = g
rewrite to:
f(g)
only happens if f is a function that only has overloads for () and (one
argument). No variadics.

3. Parens are required for calling delegates or function pointers.

4. No more @property.


Maybe one possible issue to note. I am sorry if someone has already 
noted this, but I didn't see it, so here it is.


In druntime's object_.d AssociativeArray has:
@property size_t length() { return _aaLen(p); }

By removing @property, typeof([].length) is no longer uint or ulong. It 
would change into uint() or ulong(). And not just for length; any 
other property's type would change.


I think that this is one big potential code breaker for everyone who 
uses something similar to the following:


typeof([].length) l = [].length;

Maybe I am wrong, but my personal opinion is that code like this should 
compile, because semantically length is a property and the fact that it 
is a function is just an implementation detail.


Re: @property - take it behind the woodshed and shoot it?

2013-01-25 Thread luka8088

On 24.1.2013 23:04, Andrei Alexandrescu wrote:

On 1/24/13 4:56 PM, Adam Wilson wrote:

Simplicity is clearly good, but there's something to be said about
those warts in chained calls. The UFCS-enabled idioms clearly bring a
strong argument to the table, there's no ignoring it.

Andrei


Then @property needs to be fixed such that optional parens don't effect
it one way or the other. Removing the concept of properties and making
functions that look like properties through optional parens is a very
poor (and lazy) solution. As Mr. Ruppe pointed out, properties are DATA,
and functions do stuff. That statement alone is an excellent argument
for clearly delineating which is which... Properties are not functions.


I'm not all that convinced, and it's easy to get wedged into a black vs
white position that neglects many subtleties. Properties are DATA, well
except when you need to pass fields by reference etc. at which point the
syntactic glue comes unglued.

There's been a lot of strong positions years ago about functions vs.
procedures, statements vs. expressions, or even (the glorious 60s!)
in-language I/O primitives vs. I/O as library facilities. Successful
languages managed to obviate such dichotomies.


Andrei


Maybe this will clarify what is meant by "properties are data":


// http://dpaste.dzfl.pl/822aab11

auto foo () {
  return { return 0; };
}

@property auto bar () {
  return { return 0; };
}

void main () {
  auto localFoo = foo;
  assert(!is(typeof(localFoo) == typeof(foo)));
  auto localBar = bar;
  assert(is(typeof(localBar) == typeof(bar)));
}



function:
auto localFoo = foo; // call foo and store the result
typeof(localFoo) // typeof(data returned by foo) - function pointer
typeof(foo) // typeof(function foo) - function

data:
auto localBar = bar; // get data represented by bar
typeof(localBar) // typeof(data represented by bar) - function pointer
typeof(bar) // typeof(data represented by bar) - function pointer


The data represented by bar can be some value, or a function call that 
returns a value. As a user of that property I don't care, and I don't 
want my code to break if that property changes from a function pointer 
to a function that returns a function pointer. Hence typeof(foo) is a 
function, and typeof(bar) is the type of that property (the type that is 
returned by calling bar). I would like to emphasize that @property is 
not a syntactic, but rather a semantic issue.




Re: Something needs to happen with shared, and soon.

2012-11-15 Thread luka8088

On 15.11.2012 11:52, Manu wrote:

On 15 November 2012 12:14, Jacob Carlborg <d...@me.com> wrote:

On 2012-11-15 10:22, Manu wrote:

Not to repeat my prev post... but in reply to Walter's take on
it, it
would be interesting if 'shared' just added implicit lock()/unlock()
methods to do the mutex acquisition and then remove the cast
requirement, but have the language runtime assert that the object is
locked whenever it is accessed (this guarantees the safety in a more
useful way, the casts are really annying). I can't imagine a
simpler and
more immediately useful solution.


How about implementing a library function, something like this:

shared int i;

lock(i, (x) {
 // operate on x
});

* "lock" will acquire a lock
* Cast away shared for "i"
* Call the delegate with the now plain "int"
* Release the lock

http://pastebin.com/tfQ12nJB


Interesting concept. Nice idea, could certainly be useful, but it
doesn't address the problem as directly as my suggestion.
There are still many problem situations, for instance, any time a
template is involved. The template doesn't know to do that internally,
but under my proposal, you lock it prior to the workload, and then the
template works as expected. Templates won't just break and fail whenever
shared is involved, because assignments would be legal. They'll just
assert that the thing is locked at the time, which is the programmers
responsibility to ensure.



I managed to make a simple example that works with the current 
implementation:


http://dpaste.dzfl.pl/27b6df62

http://forum.dlang.org/thread/k7orpj$1tt5$1...@digitalmars.com?page=4#post-k7s0gs:241h45:241:40digitalmars.com

It seems to me that solving this shared issue cannot be done purely on 
a compiler basis but will require runtime support. Actually, I don't see 
how it can be done properly without being able to say "this lock must be 
held when accessing this variable".


http://dpaste.dzfl.pl/edbd3e10
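
For reference, a minimal sketch of the lock-then-cast idiom being 
discussed in this thread (the names are illustrative; the mutex comes 
from core.sync.mutex):

import core.sync.mutex;

shared int counter;
__gshared Mutex counterLock;

shared static this () {
  counterLock = new Mutex;
}

void increment () {
  // 1. ensure single-threaded access by acquiring the mutex
  counterLock.lock();
  scope (exit) counterLock.unlock();
  // 2. cast away shared while the lock is held
  auto p = cast(int*) &counter;
  // 3. operate on the data; the non-shared view must not escape the lock
  (*p)++;
}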


Re: Something needs to happen with shared, and soon.

2012-11-14 Thread luka8088

On 14.11.2012 20:54, Sean Kelly wrote:

On Nov 13, 2012, at 1:14 AM, luka8088  wrote:


On Tuesday, 13 November 2012 at 09:11:15 UTC, luka8088 wrote:

On 12.11.2012 3:30, Walter Bright wrote:

On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:

It's starting to get outright embarrassing to talk to newcomers about D's
concurrency support because the most fundamental part of it -- the
shared type
qualifier -- does not have well-defined semantics at all.


I think a couple things are clear:

1. Slapping shared on a type is never going to make algorithms on that
type work in a concurrent context, regardless of what is done with
memory barriers. Memory barriers ensure sequential consistency, they do
nothing for race conditions that are sequentially consistent. Remember,
single core CPUs are all sequentially consistent, and still have major
concurrency problems. This also means that having templates accept
shared(T) as arguments and have them magically generate correct
concurrent code is a pipe dream.

2. The idea of shared adding memory barriers for access is not going to
ever work. Adding barriers has to be done by someone who knows what
they're doing for that particular use case, and the compiler inserting
them is not going to substitute.


However, and this is a big however, having shared as compiler-enforced
self-documentation is immensely useful. It flags where and when data is
being shared. So, your algorithm won't compile when you pass it a shared
type? That is because it is NEVER GOING TO WORK with a shared type. At
least you get a compile time indication of this, rather than random
runtime corruption.

To make a shared type work in an algorithm, you have to:

1. ensure single threaded access by aquiring a mutex
2. cast away shared
3. operate on the data
4. cast back to shared
5. release the mutex

Also, all op= need to be disabled for shared types.



This clarifies a lot, but still a lot of people get confused with:
http://dlang.org/faq.html#shared_memory_barriers
is it a faq error ?

and also with http://dlang.org/faq.html#shared_guarantees said, I come to think 
that the fact that the following code compiles is either lack of 
implementation, a compiler bug or a faq error ?


//

import core.thread;

void main () {
  int i;
  (new Thread({ i++; })).start();
}


It's intentional.  core.thread is for people who know what they're doing, and 
there are legitimate uses along these lines:

void main() {
 int i;
 auto t = new Thread({i++;});
 t.start();
 t.join();
 write(i);
}

This is perfectly safe and has a deterministic result.


Yes, that makes perfect sense... I just wanted to point out the 
misguidance in the FAQ because (at least before this forum thread) there 
is not much written about shared, and you can get a wrong idea from it 
(at least I did).


Re: Const ref and rvalues again...

2012-11-14 Thread luka8088

On 13.11.2012 15:07, martin wrote:

On Tuesday, 13 November 2012 at 08:34:19 UTC, luka8088 wrote:

Your proposal isn't really related to this thread's topic


Um, "Const ref and rvalues again", I suggest it to be the default
behavior, how is this not related ?


The topic here is binding rvalues to (const) ref parameters. You, on the
other hand, are suggesting to flip the constness of by-value parameters
(int => val/mutable int, const int => int), which affects both rvalues
and lvalues (no difference between them) and only by-value parameters.


Yes, you understood correctly:
void f (const ref int x, int y, ref int z); =>
void f (int x, val int y, ref int z);

The point here is to make "We need a way for a function to declare
that it doesn't want its argument to be copied, but it also doesn't
care whether the argument is an rvalue or an lvalue" a default
behavior.


So now tell me why argument x wouldn't be copied. It's passed by value,
so of course it is copied (lvalues)/moved (rvalues) just as it is now.
The only difference is that the parameter won't be modified by f().

I guess what you furthermore implicate is that you'd expect the compiler
to automatically pass appropriate arguments to such parameters by
reference to avoid copying (for large structs or structs with
non-trivial copy constructors). Such a (handy!) optimization is sadly
not possible due to aliasing issues, e.g.:

int foo(ref int dst, const int src)
{
dst = 2*src;
return src;
}
// "optimized" foo():
int bar(ref int dst, const ref int src)
{
dst = 2*src;
return src;
}

int i = 1;
assert(foo(i, i) == 1 && i == 2); // okay
i = 1;
assert(bar(i, i) == 2 && i == 2); // wtf?!
// the const src parameter is actually modified since the
// original argument i is also used as mutable dst parameter!


Would it ? How many functions actually change their non ref/out
arguments ? Can you point out any existing public code that would be
broken ?


I don't want to look for examples in Phobos etc. as it should be trivial
to imagine cases such as:

void bla(float x)
{
// restrict x to safe range [0,1]
x = max(0, min(1, x));
}


I see, you are correct: if it is not copied then it can be changed 
through some other reference before the function finishes, hence it must be 
copied.


Re: Const ref and rvalues again...

2012-11-13 Thread luka8088

On 13.11.2012 11:00, Era Scarecrow wrote:

On Tuesday, 13 November 2012 at 08:34:19 UTC, luka8088 wrote:

Would it ? How many functions actually change their non ref/out
arguments ? Can you point out any existing public code that would be
broken ?


It would be possible, if the language became const-preferring, for a
simple regex tool to be made that would do the conversions, so
any code broken in this way could be un-broken just as easily; but that
assumes you aren't using mixins or magic as part of your signatures.

Somehow this reminds me a little of when I worked at a company where we
were trying out ASP as a web server; the whole VB script was 'by ref'
by default, so you littered all your functions with 'ByVal' in order for
your code to behave as you expected.


Anyways, my take on this is consistency would be a lot more difficult
and annoying unless you had different rules for the signature vs all
other references... I doubt you would say 'this is mutable here but
immutable here' type of thing. So assuming 'mutable' is used, then
the following would be comparable...

//D as of now
int func(int x, int y, const ref int z) {
int something; //mutable far more likely
int something2;
const int lessOften;
}

//then would become...
//if x & y aren't ever changed then mutable may be unneeded.
mutable int func(mutable int x, mutable int y, ref int z) {
mutable int something;
mutable int something2;
int lessOften; //const (once set)
}

//or for inconsistency..
//mutable or const as a return? (Or either?)
//and which would/should you use to reverse it?
int func(mutable int x, mutable int y, ref int z) {
int something; //mutable
int something2;
const int lessOften; //const
}

It seems in a function body you are far more likely to have mutable items,
while in the signature you're more likely to have const items; but
mixing them, or changing how you do it anywhere other than the signature,
would likely break code very easily, and it doesn't seem like a good idea...

Now in the above, the function may not specify 'x' as const, but that
doesn't guarantee it ever changes it (and it's a local copy, so does it
matter?); specifically specifying it may be more clutter than
actually useful.

All in all it seems like it would cause far more confusion (and break
code) than help; although having the compiler prefer const versions of
functions/methods to non-const ones should probably have a higher
priority (although you then couldn't have a non-const one unless it was
part of the struct/class constness and not the variables, and with ref
preferred over non-ref).


Can you point out any existing public code that would be broken ?


Re: Something needs to happen with shared, and soon.

2012-11-13 Thread luka8088

On 13.11.2012 10:20, Sönke Ludwig wrote:

On 13.11.2012 10:14, luka8088 wrote:

On Tuesday, 13 November 2012 at 09:11:15 UTC, luka8088 wrote:

On 12.11.2012 3:30, Walter Bright wrote:

On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:

It's starting to get outright embarrassing to talk to newcomers about D's
concurrency support because the most fundamental part of it -- the
shared type
qualifier -- does not have well-defined semantics at all.


I think a couple things are clear:

1. Slapping shared on a type is never going to make algorithms on that
type work in a concurrent context, regardless of what is done with
memory barriers. Memory barriers ensure sequential consistency, they do
nothing for race conditions that are sequentially consistent. Remember,
single core CPUs are all sequentially consistent, and still have major
concurrency problems. This also means that having templates accept
shared(T) as arguments and have them magically generate correct
concurrent code is a pipe dream.

2. The idea of shared adding memory barriers for access is not going to
ever work. Adding barriers has to be done by someone who knows what
they're doing for that particular use case, and the compiler inserting
them is not going to substitute.


However, and this is a big however, having shared as compiler-enforced
self-documentation is immensely useful. It flags where and when data is
being shared. So, your algorithm won't compile when you pass it a shared
type? That is because it is NEVER GOING TO WORK with a shared type. At
least you get a compile time indication of this, rather than random
runtime corruption.

To make a shared type work in an algorithm, you have to:

1. ensure single threaded access by acquiring a mutex
2. cast away shared
3. operate on the data
4. cast back to shared
5. release the mutex

Also, all op= need to be disabled for shared types.



This clarifies a lot, but still a lot of people get confused with:
http://dlang.org/faq.html#shared_memory_barriers
is it a FAQ error?

and also, given what http://dlang.org/faq.html#shared_guarantees says, I come 
to think that the fact that the following code compiles is either a lack of 
implementation, a compiler bug, or a FAQ error?

//

import core.thread;

void main () {
   shared int i;
   (new Thread({ i++; })).start();
}


Um, sorry, the following code:

//

import core.thread;

void main () {
   int i;
   (new Thread({ i++; })).start();
}



Only std.concurrency (using spawn() and send()) enforces that unshared data 
cannot be passed between threads. The core.thread module is just a low-level 
module that represents the OS functionality.
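
For instance, a minimal std.concurrency sketch (illustrative only, not from 
the thread) where that enforcement shows up:

//

import std.concurrency;
import std.stdio;

void worker () {
  auto msg = receiveOnly!int(); // plain values may cross threads
  writeln("got ", msg);
}

void main () {
  auto tid = spawn(&worker);
  tid.send(42);         // ok: int carries no unshared indirections
  // tid.send(new int); // rejected at compile time: mutable indirection
}

//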


In that case http://dlang.org/faq.html#shared_guarantees is wrong; it is 
not a correct guarantee. Or at least that should be noted there. If 
nothing else, it is confusing...


Re: Something needs to happen with shared, and soon.

2012-11-13 Thread luka8088

On Tuesday, 13 November 2012 at 09:11:15 UTC, luka8088 wrote:

On 12.11.2012 3:30, Walter Bright wrote:

On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:

It's starting to get outright embarrassing to talk to newcomers about D's
concurrency support because the most fundamental part of it -- the
shared type
qualifier -- does not have well-defined semantics at all.


I think a couple things are clear:

1. Slapping shared on a type is never going to make algorithms on that
type work in a concurrent context, regardless of what is done with
memory barriers. Memory barriers ensure sequential consistency, they do
nothing for race conditions that are sequentially consistent. Remember,
single core CPUs are all sequentially consistent, and still have major
concurrency problems. This also means that having templates accept
shared(T) as arguments and have them magically generate correct
concurrent code is a pipe dream.

2. The idea of shared adding memory barriers for access is not going to
ever work. Adding barriers has to be done by someone who knows what
they're doing for that particular use case, and the compiler inserting
them is not going to substitute.


However, and this is a big however, having shared as compiler-enforced
self-documentation is immensely useful. It flags where and when data is
being shared. So, your algorithm won't compile when you pass it a shared
type? That is because it is NEVER GOING TO WORK with a shared type. At
least you get a compile time indication of this, rather than random
runtime corruption.

To make a shared type work in an algorithm, you have to:

1. ensure single threaded access by acquiring a mutex
2. cast away shared
3. operate on the data
4. cast back to shared
5. release the mutex

Also, all op= need to be disabled for shared types.



This clarifies a lot, but still a lot of people get confused with:
http://dlang.org/faq.html#shared_memory_barriers
is it a FAQ error?

and also, given what http://dlang.org/faq.html#shared_guarantees says, I come 
to think that the fact that the following code compiles is either a lack of 
implementation, a compiler bug, or a FAQ error?


//

import core.thread;

void main () {
  shared int i;
  (new Thread({ i++; })).start();
}


Um, sorry, the following code:

//

import core.thread;

void main () {
  int i;
  (new Thread({ i++; })).start();
}



Re: Something needs to happen with shared, and soon.

2012-11-13 Thread luka8088

On 12.11.2012 3:30, Walter Bright wrote:

On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:

It's starting to get outright embarrassing to talk to newcomers about D's
concurrency support because the most fundamental part of it -- the
shared type
qualifier -- does not have well-defined semantics at all.


I think a couple things are clear:

1. Slapping shared on a type is never going to make algorithms on that
type work in a concurrent context, regardless of what is done with
memory barriers. Memory barriers ensure sequential consistency, they do
nothing for race conditions that are sequentially consistent. Remember,
single core CPUs are all sequentially consistent, and still have major
concurrency problems. This also means that having templates accept
shared(T) as arguments and have them magically generate correct
concurrent code is a pipe dream.

2. The idea of shared adding memory barriers for access is not going to
ever work. Adding barriers has to be done by someone who knows what
they're doing for that particular use case, and the compiler inserting
them is not going to substitute.


However, and this is a big however, having shared as compiler-enforced
self-documentation is immensely useful. It flags where and when data is
being shared. So, your algorithm won't compile when you pass it a shared
type? That is because it is NEVER GOING TO WORK with a shared type. At
least you get a compile time indication of this, rather than random
runtime corruption.

To make a shared type work in an algorithm, you have to:

1. ensure single threaded access by acquiring a mutex
2. cast away shared
3. operate on the data
4. cast back to shared
5. release the mutex

Also, all op= need to be disabled for shared types.



This clarifies a lot, but still a lot of people get confused with:
http://dlang.org/faq.html#shared_memory_barriers
is it a FAQ error?

and also, given what http://dlang.org/faq.html#shared_guarantees says, I come 
to think that the fact that the following code compiles is either a lack of 
implementation, a compiler bug, or a FAQ error?


//

import core.thread;

void main () {
  shared int i;
  (new Thread({ i++; })).start();
}




Re: Const ref and rvalues again...

2012-11-13 Thread luka8088

On 13.11.2012 2:16, martin wrote:

On Monday, 12 November 2012 at 23:38:43 UTC, luka8088 wrote:

What about making this the default behavior and introducing a new
keyword for when the function wants to modify the argument but it is not ref
(pass by value)? The reason I think this should be the default
behavior is that not many functions actually modify their arguments,
and so it leaves a lot of space for optimization.

For example:

void f (int x, val int y, ref int z) {
x = 1; // x is not copied
// compiler throws an error, x is not passed by value
// and therefore could not / should not be changed
y = 2; // ok, y is copied
z = 3; // ok, z is a reference
}


Your proposal isn't really related to this thread's topic, but I


Um, "Const ref and rvalues again", I suggest it to be the default 
behavior, how is this not related ?



understand what you mean (although your code comments distract me a bit):

void f(const int x, int y, ref int z); =>
void f(int x, val/mutable int y, ref int z);



Yes, you understood correctly:
void f (const ref int x, int y, ref int z); =>
void f (int x, val int y, ref int z);

The point here is to make "We need a way for a function to declare that 
it doesn't want its argument to be copied, but it also doesn't care 
whether the argument is an rvalue or an lvalue" a default behavior.



I use const/in ;) parameters a lot in my code too to prevent accidental
modifications, so my function signatures may be more compact by treating
normal pass-by-value parameters as const if not denoted with a special
keyword. I guess it wouldn't be very important for optimization though
because I'd expect the optimizer to detect unchanged parameters. Anyway,
your proposal would completely break existing code.


Would it ? How many functions actually change their non ref/out 
arguments ? Can you point out any existing public code that would be 
broken ?




Re: Const ref and rvalues again...

2012-11-12 Thread luka8088
What about making this the default behavior and introducing a new keyword 
for when the function wants to modify the argument but it is not ref (pass by 
value)? The reason I think this should be the default behavior 
is that not many functions actually modify their arguments, and so it 
leaves a lot of space for optimization.


For example:

void f (int x, val int y, ref int z) {
  x = 1; // x is not copied
 // compiler throws an error, x is not passed by value
// and therefore could not / should not be changed
  y = 2; // ok, y is copied
  z = 3; // ok, z is a reference
}

On 18.10.2012 5:07, Malte Skarupke wrote:

Hello,

I realize that this has been discussed before, but so far there is no
solution and this really needs to be a high priority:

We need a way for a function to declare that it doesn't want its
argument to be copied, but it also doesn't care whether the argument is
an rvalue or an lvalue.

The C++ way of doing this would be to declare the argument as a const &.
Apparently it is not desired that we do the same thing for const ref.

Currently, if you want that behavior, you have to write 2^n permutations
of your function, with n being the number of arguments that the function
takes.

Here's my attempt at passing a struct to a function that takes three
arguments without the struct being copied:

int copyCounter = 0;
struct CopyCounter
{
this(this) { ++copyCounter; }
}
void takeThree(ref in CopyCounter a, ref in CopyCounter b, ref in
CopyCounter c)
{
writeln("took three");
}
void takeThree(in CopyCounter a, ref in CopyCounter b, ref in
CopyCounter c)
{
takeThree(a, b, c);
}
void takeThree(ref in CopyCounter a, in CopyCounter b, ref in
CopyCounter c)
{
takeThree(a, b, c);
}
void takeThree(ref in CopyCounter a, ref in CopyCounter b, in
CopyCounter c)
{
takeThree(a, b, c);
}
void takeThree(in CopyCounter a, in CopyCounter b, ref in CopyCounter c)
{
takeThree(a, b, c);
}
void takeThree(in CopyCounter a, ref in CopyCounter b, in CopyCounter c)
{
takeThree(a, b, c);
}
void takeThree(ref in CopyCounter a, in CopyCounter b, in CopyCounter c)
{
takeThree(a, b, c);
}
void takeThree(in CopyCounter a, in CopyCounter b, in CopyCounter c)
{
takeThree(a, b, c);
}
static CopyCounter createCopyCounter()
{
return CopyCounter();
}
void main()
{
CopyCounter first;
CopyCounter second;
CopyCounter third;
takeThree(first, second, third);
takeThree(createCopyCounter(), second, createCopyCounter());
assert(copyCounter == 0); // yay, works
}


My proposed solution is this:
- Make functions that take "ref in" arguments also accept rvalues.
- The user can still provide an overload that accepts an rvalue, using
the "in" keyword, and that one will be preferred over the "ref in" version.


What do you think?

Malte




Re: Something needs to happen with shared, and soon.

2012-11-12 Thread luka8088

Here is a wild idea:

//

void main () {

  mutex x;
  // mutex is not a type but rather a keyword
  // x is a symbol in order to allow
  // different x in different scopes

  shared(x) int i;
  // ... or maybe use UDA ?
  // mutex x must be locked
  // in order to change i

  synchronized (x) {
// lock x in a compiler-aware way
i++;
// compiler guarantees that i will not
// be changed outside synchronized(x)
  }

}

//

so I tried something similar with current implementation:

//

import std.stdio;

void main () {

  shared(int) i1;
  auto m1 = new MyMutex();

  i1.attachMutex(m1);
  // m1 must be locked in order to modify i1

  // i1++;
  // should throw a compiler error

  // sharedAccess(i1)++;
  // runtime exception, m1 is not locked

  synchronized (m1) {
sharedAccess(i1)++;
// ok, m1 is locked
  }

}

// some generic code

import core.sync.mutex;

class MyMutex : Mutex {
  @property bool locked = false;
  @trusted void lock () {
super.lock();
locked = true;
  }
  @trusted void unlock () {
locked = false;
super.unlock();
  }
  bool tryLock () {
bool result = super.tryLock();
if (result)
  locked = true;
return result;
  }
}

template unshared (T : shared(T)) {
  alias T unshared;
}

template unshared (T : shared(T)*) {
  alias T* unshared;
}

auto ref sharedAccess (T) (ref T value) {
  assert(value.attachMutex().locked);
  unshared!(T)* refVal = (cast(unshared!(T*)) &value);
  return *refVal;
}

MyMutex attachMutex (T) (T value, MyMutex mutex = null) {
  static __gshared MyMutex[T] mutexes;
  // this memory leak can be solved
  // but it's left like this to make the code simple
  synchronized if (value !in mutexes && mutex !is null)
mutexes[value] = mutex;
  assert(mutexes[value] !is null);
  return mutexes[value];
}

//

and another example with methods:

//

import std.stdio;

class a {
  int i;
  void increment () { i++; }
}

void main () {

  auto a1 = new shared(a);
  auto m1 = new MyMutex();

  a1.attachMutex(m1);
  // m1 must be locked in order to modify a1

  // a1.increment();
  // compiler error

  // sharedAccess(a1).increment();
  // runtime exception, m1 is not locked

  synchronized (m1) {
sharedAccess(a1).increment();
// ok, m1 is locked
  }

}

// some generic code

import core.sync.mutex;

class MyMutex : Mutex {
  @property bool locked = false;
  @trusted void lock () {
super.lock();
locked = true;
  }
  @trusted void unlock () {
locked = false;
super.unlock();
  }
  bool tryLock () {
bool result = super.tryLock();
if (result)
  locked = true;
return result;
  }
}

template unshared (T : shared(T)) {
  alias T unshared;
}

template unshared (T : shared(T)*) {
  alias T* unshared;
}

auto ref sharedAccess (T) (ref T value) {
  assert(value.attachMutex().locked);
  unshared!(T)* refVal = (cast(unshared!(T*)) &value);
  return *refVal;
}

MyMutex attachMutex (T) (T value, MyMutex mutex = null) {
  static __gshared MyMutex[T] mutexes;
  // this memory leak can be solved
  // but it's left like this to make the code simple
  synchronized if (value !in mutexes && mutex !is null)
mutexes[value] = mutex;
  assert(mutexes[value] !is null);
  return mutexes[value];
}

//

In any case, if shared itself does not provide locking and does not 
fix problems but only points them out (not to be misunderstood, I 
completely agree with that), then I think that assigning a mutex to the 
variable is a must.


Although the latter examples already work with the current implementation, I 
like the first one (or something similar to it) more: it looks cleaner 
and leaves space for additional optimizations.



On 12.11.2012 17:14, deadalnix wrote:

On 12/11/2012 16:00, luka8088 wrote:

If I understood correctly there is no reason why this should not
compile ?

import core.sync.mutex;

class MyClass {
void method () {}
}

void main () {
auto myObject = new shared(MyClass);
synchronized (myObject) {
myObject.method();
}
}



D has no ownership, so the compiler can't know
whether it is safe to do so or not.




Re: Something needs to happen with shared, and soon.

2012-11-12 Thread luka8088

If I understood correctly there is no reason why this should not compile ?

import core.sync.mutex;

class MyClass {
  void method () {}
}

void main () {
  auto myObject = new shared(MyClass);
  synchronized (myObject) {
myObject.method();
  }
}


On 12.11.2012 12:19, Walter Bright wrote:

On 11/12/2012 2:57 AM, Johannes Pfau wrote:

But there are also shared member functions and they're kind of annoying
right now:

* You can't call shared methods from non-shared methods or vice versa.
This leads to code duplication, you basically have to implement
everything twice:


You can't get away from the fact that data that can be accessed from
multiple threads has to be dealt with in a *fundamentally* different way
than single threaded code. You cannot share code between the two. There
is simply no conceivable way that "shared" can be added and then code
will become thread safe.

Most of the issues you're having seem to revolve around treating shared
data access just like single threaded access, except "share" was added.
This cannot work. The compiler error messages, while very annoying, are
in their own obscure way pointing this out.

It's my fault, I have not explained shared very well, and have oversold
it. It does not solve concurrency problems, it points them out.



--
struct ABC
{
Mutex mutex;
void a()
{
aImpl();
}
shared void a()
{
synchronized(mutex)
aImpl(); //not allowed
}
private void aImpl()
{

}
}
--
The only way to avoid this is casting away shared in the shared a
method, but that really is annoying.


As I explained, the way to manipulate shared data is to get exclusive
access to it via a mutex, cast away the shared-ness, manipulate it as
single threaded data, convert it back to shared, and release the mutex.




* You can't have data members be included only for the shared version.
In the above example, the mutex member will always be included, even
if the ABC instance is thread local.

So you're often better off writing a non-thread safe struct and writing
a wrapper struct. This way you don't have useless overhead in the
non-thread safe implementation. But the nice instance syntax is
lost:

shared(ABC) abc1; ABC abc2;
vs
SharedABC abc1; ABC abc2;

even worse, shared propagation won't work this way;

struct DEF
{
ABC abc;
}
shared(DEF) def;
def.abc.a();



and then there's also the druntime issue: core.sync doesn't work with
shared which leads to this schizophrenic situation:
struct A
{
Mutex m;
void a() //Doesn't compile with shared
{
m.lock(); //Compiles, but locks on a TLS mutex!
m.unlock();
}
}

struct A
{
shared Mutex m;
shared void a()
{
m.lock(); //Doesn't compile
(cast(Mutex)m).unlock(); //Ugly
}
}

So the only useful solution avoids using shared:
struct A
{
__gshared Mutex m; //Good we have __gshared!
shared void a()
{
m.lock();
m.unlock();
}
}


Yes, mutexes will need to exist in a global space.




And then there are some open questions with advanced use cases:
* How do I make sure that a non-shared delegate is only accepted if I
have an A, but a shared delegate should be supported
for shared(A) and A? (calling a shared delegate from a non-shared
function should work, right?)

struct A
{
void a(T)(T v)
{
writeln("non-shared");
}
shared void a(T)(T v) if (isShared!v) //isShared doesn't exist
{
writeln("shared");
}
}


First, you have to decide what you mean by a shared delegate. Do you
mean the variable containing the two pointers that make up a delegate
are shared, or the delegate is supposed to deal with shared data?




And having fun with this little example:
http://dpaste.dzfl.pl/7f6a4ad2

* What's the difference between: "void delegate() shared"
and "shared(void delegate())"?

Error: cannot implicitly convert expression (&a.abc) of type void
delegate() shared


The delegate deals with shared data.


to shared(void delegate())


The variable holding the delegate is shared.
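
A small sketch of the difference, with made-up type and member names:

//

struct S {
  int x;
  void bump () shared {} // a method that deals with shared data
}

void main () {
  shared S s;

  // the delegate itself deals with shared data:
  void delegate () shared d1 = &s.bump;

  // the variable holding the delegate is shared:
  shared(void delegate ()) d2;

  d1();
}

//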



* So let's call it void delegate() shared instead:
void incrementA(void delegate() shared del)
/home/c684/c922.d(7): Error: const/immutable/shared/inout attributes
are only valid for non-static member functions







Re: Proposal: __traits(code, ...) and/or .codeof

2012-10-11 Thread luka8088

On Tuesday, 9 October 2012 at 19:29:34 UTC, F i L wrote:

On Tuesday, 9 October 2012 at 13:28:55 UTC, luka8088 wrote:

Is this at least similar to what you had in mind ?

[ ..code.. ]


Yes, I realized, a bit after I originally posted that, that my 
suggestion was already possible if BankType & Logger were 
mixin templates instead of structs/classes. Thanks for the code 
example though.


I still think a built-in .codeof/.astof would be nice, but 
what D really needs to achieve this in a syntactically pleasing 
and powerful way is 'macro' templates (like Nimrod has) which 
work on the AST directly. I doubt this is a major concern ATM, 
however.


My point of making this example was to show that nothing is 
missing in D itself; you just need to be more creative. If you 
want to write in a manner more similar to your original example 
(by that I mean without mixin templates) you can use classes: 
class methods can be turned into delegates with their context 
pointer changed before execution, and then you would get the same 
effect. Also, having such syntax could be very confusing, because 
someone could introduce syntax which is very similar to D 
but behaves differently and is embedded in a way that looks 
just like D code.
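
A sketch of that delegate trick, with made-up names (this is @system-level 
fiddling with the delegate's .ptr property):

//

import std.stdio;

class Account {
  int amount;
  void deposit (int value) { amount += value; }
}

void main () {
  auto a = new Account;
  auto b = new Account;

  auto dg = &a.deposit;   // delegate bound to a
  dg(10);                 // a.amount == 10

  dg.ptr = cast(void*) b; // repoint the delegate's context to b
  dg(5);                  // now mutates b instead

  writeln(a.amount, " ", b.amount); // prints: 10 5
}

//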


Also, the idea is to have the "// generic code" part in some library, 
not visible to the *user*, so the rest of the code stays 
syntactically pleasing. If you check the current Phobos code, you 
will see some examples of using mixins this way.


Please also check the comments on 
https://github.com/D-Programming-Language/dmd/pull/953 (if you 
haven't done that already).




Re: Proposal: __traits(code, ...) and/or .codeof

2012-10-09 Thread luka8088

Is this at least similar to what you had in mind ?

http://dpaste.dzfl.pl/a5dc2875

module program;

import std.stdio;

mixin template BankAccount () {
  public int amount;
  void deposit (int value) { this.amount += value; }
  void withdraw (int value) { this.amount -= value; }
  auto currentAmount () { return this.amount; }
}

mixin template Logger () {
  void deposit (int value) { writeln("User deposited money"); }
  void withdraw (int value) { writeln("User requested money back"); }
}

void main () {

  mixin aspect!("bank", BankAccount, Logger);

  bank b1;
  bank b2;

  b1.deposit(10);
  b1.deposit(20);
  b1.withdraw(5);
  writeln("b1.currentAmount: ", b1.currentAmount);

  b2.deposit(50);
  b2.withdraw(40);
  b2.deposit(100);
  writeln("b2.currentAmount: ", b2.currentAmount);

}


// generic code

mixin template aspect (string name, T...) {
  template aspectDispatch (string name, uint n) {
    import convert = std.conv;
    static if (n >= 1)
      enum aspectDispatch = ""
        ~ "import std.traits;\n"
        ~ "static if (__traits(hasMember, data.aspect_" ~ convert.to!string(n) ~ ", `" ~ name ~ "`))\n"
        ~ "  static if (!is(ReturnType!(data.aspect_" ~ convert.to!string(n) ~ "." ~ name ~ ") == void))\n"
        ~ "return data.aspect_" ~ convert.to!string(n) ~ "." ~ name ~ "(arguments);\n"
        ~ "  else\n"
        ~ "data.aspect_" ~ convert.to!string(n) ~ "." ~ name ~ "(arguments);\n"
        ~ aspectDispatch!(name, n - 1)
      ;
    else
      enum aspectDispatch = "";
  }
  auto code () {
    import convert = std.conv;
    string ret = ""
      ~ "struct " ~ name ~ " {\n"
      ~ "  struct aspectData {\n"
    ;
    uint i = 0;
    foreach (a; T)
      ret ~= "mixin " ~ __traits(identifier, a) ~ " aspect_" ~ convert.to!string(++i) ~ ";\n";
    ret ~= ""
      ~ "  }\n"
      ~ "  aspectData data;\n"
      ~ "  auto opDispatch (string fn, args...) (args arguments) {\n"
      ~ "mixin(aspectDispatch!(fn, " ~ convert.to!string(i) ~ "));\n"
      ~ "  }\n"
      ~ "}\n"
    ;
    return ret;
  }
  mixin(code);
}



On Thursday, 22 March 2012 at 16:00:29 UTC, F i L wrote:
So the discussions about Attributes and Aspect Oriented 
Programming (AOP) got me thinking... Basically AOP requires 
injecting code fragments together in a comprehensible way. 
Similarly, Attributes that go beyond @note (such as @GC.NoScan) 
need similar ability.


D already has the ability to mixin arbitrary code fragments at 
compile time, and to process those in useful ways through CTFE. 
Which rocks. What it lacks is the ability to reflect upon the 
actual source code due to IO limitations of CTFE. So creating a 
mixin template which pieces together a unique object is, to my 
knowledge, currently next to impossible (and slow, since you'd 
have to parse and isolate code in a .d file multiple times in a 
separate process, then compile again to put it all together).


So, to quote Walter, what compelling features would it bring? 
Here's an example of a simple AOP program from the AOP wiki 
page (probably not the best implementation, but the concept is 
there):


  struct BankType
  {
void transfer() { ... }
void getMoneyBack() { ... }
  }

  struct Logger
  {
void transfer() {
  log("transferring money...");
}
void getMoneyBack() {
  log("User requested money back");
}
  }

and now some magic...

  string bankCode(T...)(T aspects) {
auto code = "struct Bank {";
auto members = [__traits(allMembers, Bank)];
foreach (m; members) {
  code ~= "void "~m~"() {";
  code ~= __traits(getMember, Bank, m).codeof;
  foreach (a; aspects) {
if (__traits(hasMember, a, m)) {
  code ~= __traits(getMember, a, m).codeof;
}
  }
  code ~= "}"
}
return code ~ "}";
  }

  mixin template Bank(T...)
  {
mixin(bankCode(T));
  }

  mixin Bank!Logger;

  void main() {
auto b = Bank();
b.transfer(); // logs
b.getMoneyBack(); // ditto
  }

So this would allow us to make "Compilers" within the Compiler 
(Codeception), since we could parse/strip/append any existing 
code fragments together in endless combination. Generic 
"Builders" could probably be built and put into a std.builder 
lib for general use.


One particular use I have in mind is for Behavior Objects (Game 
Scripts). Each behavior would hold Property(T) objects which 
define per-property, per-state "binding" dependencies (e.g. 
position.x.bind(other.x, State.Idle)) and execution code. On 
release, the Property(T) object would be stripped away (leaving 
just T) and its behavior code "compressed" with others into 
optimized functions.


I don't know much about the internals of DMD, so I'm not sure 
this is a realistic request, but I think the idea is 
compelling. Also, for Attributes I'm not sure this technique is 
really applicable. But it's possible that the compiler could 
exploit this internally for certain Attributes like @GC.whatever





Re: Setting defaults to variadic template args

2012-10-02 Thread luka8088

And the simplest solution wins:

module program;

import std.stdio;
import std.traits;
import std.typetuple;

struct Event (T1 = int, T2 = float, Telse...) {

  alias TypeTuple!(T1, T2, Telse) T;

  void F (T args) {
writeln(typeid(typeof(args)));
  }

}

void main () {

  Event!() e1;

  e1.F(5, 6);

}
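
And the defaults can still be overridden explicitly; an untested sketch, 
dropped into the main above alongside e1:

//

  Event!(string, double, char) e2;
  e2.F("hi", 1.5, 'x'); // F now accepts (string, double, char)

//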

I hope that you found the solution that you were looking for.

On Tuesday, 2 October 2012 at 13:49:56 UTC, luka8088 wrote:

Or maybe... This seems like a much better solution:


module program;

import std.stdio;
import std.traits;
import std.typetuple;

template SelectTrue (bool condition, T) {
  static if (condition)
alias T SelectTrue;
}

struct Event (T...) {

  alias TypeTuple!(T,
SelectTrue!(T.length < 1, int),
SelectTrue!(T.length < 2, float),
  ) T2;

  void F (T2 args) {
writeln(typeid(typeof(args)));
  }

}

void main () {

  Event!() e1;

  e1.F(5, 6);

}



On Tuesday, 2 October 2012 at 13:44:10 UTC, luka8088 wrote:


module program;

import std.stdio;
import std.traits;
import std.typetuple;

struct Event (T...) {

 alias EraseAll!(void, TypeTuple!(T,
   Select!(T.length < 1, int, void),
   Select!(T.length < 2, float, void),
 )) T2;

 void F (T2 args) {
   writeln(typeid(typeof(args)));
 }

}

void main () {

 Event!() e1;

 e1.F(5, 6);

}


On Tuesday, 2 October 2012 at 13:15:08 UTC, Manu wrote:

Is it possible?

Eg:
struct Event(T... = (int, float))
{
  void F(T...); // <- should default to F(int, float)
}

Does anyone have any clever tricks that will work in this scenario? Some
magic tuple syntax?





Re: Setting defaults to variadic template args

2012-10-02 Thread luka8088

Or maybe... This seems like a much better solution:


module program;

import std.stdio;
import std.traits;
import std.typetuple;

template SelectTrue (bool condition, T) {
  static if (condition)
alias T SelectTrue;
}

struct Event (T...) {

  alias TypeTuple!(T,
SelectTrue!(T.length < 1, int),
SelectTrue!(T.length < 2, float),
  ) T2;

  void F (T2 args) {
writeln(typeid(typeof(args)));
  }

}

void main () {

  Event!() e1;

  e1.F(5, 6);

}



On Tuesday, 2 October 2012 at 13:44:10 UTC, luka8088 wrote:


module program;

import std.stdio;
import std.traits;
import std.typetuple;

struct Event (T...) {

  alias EraseAll!(void, TypeTuple!(T,
Select!(T.length < 1, int, void),
Select!(T.length < 2, float, void),
  )) T2;

  void F (T2 args) {
writeln(typeid(typeof(args)));
  }

}

void main () {

  Event!() e1;

  e1.F(5, 6);

}


On Tuesday, 2 October 2012 at 13:15:08 UTC, Manu wrote:

Is it possible?

Eg:
 struct Event(T... = (int, float))
 {
   void F(T...); // <- should default to F(int, float)
 }

Does anyone have any clever tricks that will work in this scenario? Some
magic tuple syntax?





Re: Setting defaults to variadic template args

2012-10-02 Thread luka8088


module program;

import std.stdio;
import std.traits;
import std.typetuple;

struct Event (T...) {

  alias EraseAll!(void, TypeTuple!(T,
Select!(T.length < 1, int, void),
Select!(T.length < 2, float, void),
  )) T2;

  void F (T2 args) {
writeln(typeid(typeof(args)));
  }

}

void main () {

  Event!() e1;

  e1.F(5, 6);

}


On Tuesday, 2 October 2012 at 13:15:08 UTC, Manu wrote:

Is it possible?

Eg:
  struct Event(T... = (int, float))
  {
void F(T...); // <- should default to F(int, float)
  }

Does anyone have any clever tricks that will work in this scenario? Some
magic tuple syntax?