Re: DIP 88: Simple form of named parameters

2016-01-24 Thread Michel Fortin via Digitalmars-d

On 2016-01-23 14:19:03 +, Jacob Carlborg <d...@me.com> said:


This is mostly to prevent ugly hacks like Flag [1].

http://wiki.dlang.org/DIP88

[1] https://dlang.org/phobos/std_typecons.html#.Flag


On further thought, how do you make templates with specialization take 
named arguments?


template TFoo(T){ ... } // #1
template TFoo(T : T[])  { ... } // #2
template TFoo(T : char) { ... } // #3

My guess would be this:

template TFoo(T:){ ... } // #1
template TFoo(T: : T[])  { ... } // #2
template TFoo(T: : char) { ... } // #3

... but it makes the declaration a bit strange.

--
Michel Fortin
http://michelf.ca



Re: DIP 88: Simple form of named parameters

2016-01-24 Thread Michel Fortin via Digitalmars-d

On 2016-01-23 14:19:03 +, Jacob Carlborg <d...@me.com> said:


This is mostly to prevent ugly hacks like Flag [1].

http://wiki.dlang.org/DIP88

[1] https://dlang.org/phobos/std_typecons.html#.Flag


Interesting.

This is somewhat similar to an experiment of mine from 5 years ago. My 
implementation was a bit of a hack, but my intent was to have a proof 
of concept done quickly. All arguments were considered optionally named 
in this experiment (there was no syntax to enable this in the function 
prototype). Also it didn't extend to template arguments.


You can see my discussion with Walter about it here:
https://github.com/michelf/dmd/commit/673bae4982ff18a3d216bc1578f50d40f4d26d7a

Mixing reordering with overloading can become quite complicated. It's 
good that you leave that out at first. Adding reordering should be left 
as a separate task for later, if desired. Small steps.


Have you considered supporting separate variable names? Like this:

void login(string username: name, string password:) {
writeln(name ~ ": " ~ password);
}

where an identifier following the colon, if present, would be the name 
of the variable inside the function. That way you can use a shorter 
variable when it makes sense to do so. Or change the variable name 
without affecting the API.


--
Michel Fortin
http://michelf.ca



Re: OT: Swift is now open source

2015-12-06 Thread Michel Fortin via Digitalmars-d

On 2015-12-06 10:43:24 +, Jacob Carlborg <d...@me.com> said:


You're decoding characters (grapheme clusters) as you advance those indexes.


Not really what I needed, for me it would be enough with slicing the bytes.


That only works if the actual underlying representation is UTF8 (or 
other single-byte encoding). String abstracts that away from you. But 
you can do this if you want to work with bytes:


let utf8View = str.utf8
utf8View[utf8View.startIndex.advancedBy(2) ..< utf8View.endIndex.advancedBy(-1)]


or:

let arrayOfBytes = Array(str.utf8)
arrayOfBytes[2 ..< arrayOfBytes.count-1]
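The same abstraction exists in Python 3, which may make the distinction clearer (this is an analogy of mine, not part of the original Swift discussion): str hides the underlying encoding, so byte-level slicing requires an explicit encode, just like going through the utf8 view above.

```python
# Python 3 analogy: str abstracts the representation away; to slice
# bytes you must first materialize the concrete UTF-8 encoding.
s = "Hello, playground"
b = s.encode("utf-8")          # bytes: the concrete UTF-8 representation
sliced = b[2:-1]               # slice bytes directly, like Array(str.utf8)
print(sliced.decode("utf-8"))  # "llo, playgroun"
```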



It's called indexOf. (Remember, the index type is an iterator.) It does
return an optional. It will work for any type conforming to the
ContainerType protocol where Element conforms to Equatable. Like this:

 let str = "Hello, playground"
 let start = str.unicodeScalars.indexOf("p")!
 let end = str.unicodeScalars.indexOf("g")!
 str.unicodeScalars[start ..< end] // "play"
 str.unicodeScalars[start ... end] // "playg"


I was looking for a method to return the first element matching a predicate.


container.indexOf(predicate)
container.indexOf { (element) in element == "p" }
container.indexOf { $0 == "p" }
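For comparison, the same predicate-based lookup can be sketched in Python, with None standing in for Swift's optional (the helper name index_of is invented here for illustration):

```python
# Hypothetical Python analogy: a predicate-based indexOf that returns
# None (playing the role of Swift's optional) when nothing matches.
def index_of(container, predicate):
    return next(
        (i for i, element in enumerate(container) if predicate(element)),
        None,
    )

s = "Hello, playground"
assert index_of(s, lambda c: c == "p") == 7   # found: index of 'p'
assert index_of(s, lambda c: c == "z") is None  # not found: "nil"
```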

If it's an iterator I would expect to be able to get the value it 
points to. I can't see how I can do that with an Index in Swift.


container[index]

The index is an iterator in the sense that it points at one location in 
the container and applies some container-specific logic as you advance 
it. But you still have to use the container to access its value. The 
index does not expose the value even when it knows about it internally.


Not all index types are like that. Containers with random access 
normally use Int as their index type because it's sufficient and 
practical.



--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: OT: Swift is now open source

2015-12-05 Thread Michel Fortin via Digitalmars-d

On 2015-12-04 07:51:32 +, Jacob Carlborg <d...@me.com> said:


On 2015-12-03 20:10, Steven Schveighoffer wrote:


The truth is, swift is orders of magnitude better than Objective C.

I have gotten used to the nullable API, though it sometimes seems more
clunky than useful.


I find it very clunky as well. Sometimes it's too strict. I was a bit 
surprised when I noticed that converting from an integer to an enum 
returned an optional.


It's about preserving the invariants of the enum type, namely that 
it'll always have one of the allowed values. Append ! after the 
expression to unwrap the optional if you don't care about what happens 
when the value is invalid.



Apple's API is still rather verbose and hard to discover, but that is 
not swift's fault.


They could have gone the D route by separating the method name from the 
selector:


extern(Objective-C) class Foo
{
 void bar() @selector("thisIsMyReallyLongSelector:withAnotherSelector:");
}


You can do that in Swift too with @objc(some:selector:). And for Swift 
3 they do plan to give Swift-specific names to pretty much all methods 
in the Apple frameworks.

https://github.com/apple/swift-evolution/blob/master/proposals/0005-objective-c-name-translation.md


And 


the lack of semi-colons has poisoned me from writing syntactically
valid lines in D :)

I miss D's algorithms and range API when working with swift. A lot. I've
tried to use their sequence API, but it's very confusing.


I have not used it much, but I think it's quite alright. But it's 
ridiculously complicated to slice a string in Swift compared to D. One 
needs to pass in a range of a specific index type.


var str = "Hello, playground"
str.substringWithRange(Range(start: str.startIndex.advancedBy(2),
                             end: str.endIndex.advancedBy(-1)))
// "llo, playgroun"


You can be less verbose if you want:

let str = "Hello, playground"
str[str.startIndex.advancedBy(2) ..< str.endIndex.advancedBy(-1)]

Also note that those special index types are actually iterators. You're 
decoding characters (grapheme clusters) as you advance those indexes.



[...] Swift doesn't even have "find" (as far as I can see); the use of 
optional types would be perfect here.


It's called indexOf. (Remember, the index type is an iterator.) It does 
return an optional. It will work for any type conforming to the 
ContainerType protocol where Element conforms to Equatable. Like this:


let str = "Hello, playground"
let start = str.unicodeScalars.indexOf("p")!
let end = str.unicodeScalars.indexOf("g")!
str.unicodeScalars[start ..< end] // "play"
str.unicodeScalars[start ... end] // "playg"


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Firs step of D/Objective-C merged

2015-07-14 Thread Michel Fortin via Digitalmars-d-announce

On 2015-07-14 18:53:31 +, Jacob Carlborg d...@me.com said:

Hmm, I see. I imagined something similar would need to be done for the 
new exception handling in Swift 2, but for every method, that was 
unexpected. Now when Swift goes open source someone can just have a 
look and see what's going on :)


I'd be surprised if error handling had something to do with those 
thunks (they were there in Swift 1.2 after all). Error handling in 
Swift 2 is mostly a lowering of Cocoa's NSError handling pattern merged 
into the calling convention, where the compiler generates the necessary 
code in each function to propagate errors.


Objective-C exceptions are still fatal errors when they reach Swift 2 code.

I think those thunks are simply there so that Swift code can use Swift 
calling conventions everywhere, which makes things simpler on the Swift 
side and enables some optimizations that would not be allowed with 
Objective-C.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Firs step of D/Objective-C merged

2015-07-14 Thread Michel Fortin via Digitalmars-d-announce

On 2015-07-14 13:59:51 +, Jacob Carlborg d...@me.com said:


On 2015-07-14 03:11, Michel Fortin wrote:


More or less. If you define an `extern (Objective-C)` class in D (once
all my work is merged), you can use it and derive from it in Swift. If
you define an `@objc` class in Swift you can use it from Objective-C and
from D, but you can't derive from it.


Do you know why you can't derive from it?


I'm not sure. Apple's documentation says: "You cannot subclass a Swift 
class in Objective-C." I assume there could be a variety of reasons, 
such as more aggressive optimizations.




Note that the Swift ABI isn't stable yet. So the above might change at
some point.


But they need to follow the Objective-C ABI, for the @objc classes. I 
guess they technically can change the Objective-C ABI if they want to.


Actually, they only need to follow it up to what they guarantee will 
work. If you debug some Swift code mixed with Objective-C, you'll notice 
that every call to a Swift method from Objective-C first passes through 
a thunk function (not dissimilar to my first D/Objective-C bridge made 
using D template mixins). Now, suppose you override this function from 
the Objective-C side: only the thunk gets overridden, and Swift callers 
and Objective-C callers will no longer call the same function.


I haven't verified anything, but that's my theory.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Firs step of D/Objective-C merged

2015-07-13 Thread Michel Fortin via Digitalmars-d-announce

On 2015-07-13 14:02:54 +, Steven Schveighoffer schvei...@yahoo.com said:

I wanted to ask, swift can call into objective-C, so does this de-facto 
give us a D/swift binding as well? I haven't written a single line of 
swift yet, so apologies if this is a stupid question.


More or less. If you define an `extern (Objective-C)` class in D (once 
all my work is merged), you can use it and derive from it in Swift. If 
you define an `@objc` class in Swift you can use it from Objective-C 
and from D, but you can't derive from it.


Note that the Swift ABI isn't stable yet. So the above might change at 
some point.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: As discussed in DConf2015: Python-like keyword arguments

2015-05-31 Thread Michel Fortin via Digitalmars-d

On 2015-05-31 04:08:33 +, ketmar ket...@ketmar.no-ip.org said:


my work now allows this:
  string test (string a, string b="wow", string c="heh") {
    return a~b~c;
  }

  void main () {
    enum str = test(c: "cc", a: "aa");
    assert(str == "aawowcc");
  }


How does it handle overloading?

string test(bool a, string b="wow", string c="heh") {}
string test(bool a, string c="heh", bool d=true) {}

test(a: true, c: "hi"); // ambiguous!

The irony of this example is that without argument names (or more 
precisely without reordering), there'd be no ambiguity here.




and this:
  void test(A...) (A a) {
    import std.stdio;
    foreach (t; a) writeln(t);
  }

  void main () {
    test(x: 33.3, z: 44.4, a: , , d: "Yehaw");
  }
  }


For that to be really useful the argument names should be part of the 
A type so you can forward them to another function and it still 
works. For instance:


void test(string a, string b="wow", string c="heh") {}

void forward(A...)(A a) {
test(a);
}

void main() {
forward(c: "cc", a: "aa");
}


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: As discussed in DConf2015: Python-like keyword arguments

2015-05-30 Thread Michel Fortin via Digitalmars-d

On 2015-05-29 12:27:02 +, Jacob Carlborg d...@me.com said:

And here's an implementation with language support which allows named 
arguments but not reordering the arguments [2]. Originally implemented 
by Michel Fortin.


[2] https://github.com/jacob-carlborg/dmd/tree/named_parameters


I didn't know you revived that thing too. Nice.

Make sure you take note of the related comments between Walter and me here:
https://github.com/michelf/dmd/commit/673bae4982ff18a3d216bc1578f50d40f4d26d7a

At some point my plans about this changed and I wanted to implement 
named arguments differently so it'd work for template arguments too. 
But I never got to it.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: A few thoughts on std.allocator

2015-05-13 Thread Michel Fortin via Digitalmars-d

On 2015-05-12 17:21:03 +, Steven Schveighoffer schvei...@yahoo.com said:

Of course, array appending is an odd duck here, as generally you are 
not generally able to add data to an immutable piece of data.


A similar odd duck would be reference counting (again, mutable metadata 
attached to immutable data).


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: A few thoughts on std.allocator

2015-05-10 Thread Michel Fortin via Digitalmars-d
On 2015-05-10 09:50:00 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



3. Thread-local vs. shared objects

Currently in D it's legal to allocate memory in one thread and 
deallocate it in another. (One simple way to look at it is casting to 
shared.) This has a large performance cost that only benefits very few 
actual cases.


It follows that we need to change the notion that you first allocate 
memory and then brand it as shared. The "will be shared" knowledge must 
be present during allocation, and we must use different methods of 
allocation for the two cases.


Shared is implicit in the case of immutable. Think carefully: if you 
implement this and it has any efficiency benefit for non-shared 
allocations, const-allocated objects and arrays will become more 
performant than immutable-allocated ones. People will thus have an 
incentive to stay away from immutable.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: std.xml2 (collecting features)

2015-05-03 Thread Michel Fortin via Digitalmars-d
On 2015-05-03 17:39:46 +, Robert burner Schadek 
rburn...@gmail.com said:


std.xml has been considered not up to spec for nearly 3 years now. Time 
to build a successor. I currently plan the following features for it:


- SAX and DOM parser
- in-situ / slicing parsing when possible (forward range?)
- compile time switch (CTS) for lazy attribute parsing
- CTS for encoding (ubyte(ASCII), char(utf8), ... )
- CTS for input validating
- performance

Not much code yet, I'm currently building the performance test suite 
https://github.com/burner/std.xml2


Please post your feature requests, and please keep the posts DRY and on topic.


This isn't a feature request (sorry?), but I just want to point out 
that you should feel free to borrow code from 
https://github.com/michelf/mfr-xml-d  There's probably a lot you can 
reuse in there.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: C++/C mangleof inconsistency for OS X

2015-04-22 Thread Michel Fortin via Digitalmars-d

On 2015-04-22 16:32:55 +, Dan Olson zans.is.for.c...@yahoo.com said:


Since the compile chain knows whether the target system prepends an
underscore, I wonder if it can be bubbled up into some compiler trait or
version that prevents writing versioned code based on system.  I think
it would help gdc much, as it has many targets, and many prepend an
underscore to symbols.  It would also help with unittests like
compilable/cppmangle.d that would otherwise have to be tailored for
every OS that uses gcc3-style C++ mangling.


And you think it's safe to assume all symbols on OS X will always have 
an underscore as a prefix just because symbols for C and C++ stuff do? 
(Hint: some Objective-C symbols don't start with _.)


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: C++/C mangleof inconsistency for OS X

2015-04-21 Thread Michel Fortin via Digitalmars-d

On 2015-04-21 18:29:36 +, Jacob Carlborg d...@me.com said:


On 2015-04-21 19:01, Dan Olson wrote:


If I want to call a C function void debug(const char*) from a C library,
I would do this because of D's debug keyword:

   pragma(mangle, "debug")
   extern (C) void debug_c(const(char*));

Now I would think debug_c.mangleof would yield "debug"
(and that is indeed what dmd produces even on OS X).


Are there use cases where one would want to use some other mangling 
than C? I mean, D is a system programing language.


Apple does this in many of its own C headers. Lookup the definition of 
pthread_join for instance, you'll see the __DARWIN_ALIAS macro which 
when expanded under certain circumstances adds a suffix to the symbol 
name in a similar way to pragma(mangle) in D. This allows some fixes to 
only apply to code compiled with newer SDKs. (Also note that the 
underscore is explicitly put there by the macro.)


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: C++/C mangleof inconsistency for OS X

2015-04-21 Thread Michel Fortin via Digitalmars-d

On 2015-04-21 17:01:51 +, Dan Olson zans.is.for.c...@yahoo.com said:


Dan Olson zans.is.for.c...@yahoo.com writes:


Jacob Carlborg d...@me.com writes:


On 2015-04-20 18:33, Dan Olson wrote:

An observation on OSX w/ 2.067: mangleof for C++ (and D) names produces
the actual object file symbol while mangleof for C names strips a
leading underscore.

Is this intended?  If so what is rationale?


I don't think it's intentional. The point of mangleof is to evaluate
to the actual mangled name, as it appears in the object file.


Thanks Jacob.

In that case, mangleof for extern(C) names on OS X and other systems
that add a leading underscore should include the underscore.

extern(C) int x;
version(linux) pragma(msg, x.mangleof); // x
version(OSX) pragma(msg, x.mangleof);   // _x

I'm trying to understand because ldc is different than dmd, and it is
related to proper debugging on systems with leading underscores.
pragma(mangle, name) is wrapped up in this too.  This needs to be right
to help D expand to other systems.


Hmmm, I can see another point of view where mangleof should produce the
equivalent extern(C) symbol.  My gut says this is the way it should
work.

If I want to call a C function void debug(const char*) from a C library,
I would do this because of D's debug keyword:

  pragma(mangle, "debug")
  extern (C) void debug_c(const(char*));

Now I would think debug_c.mangleof would yield "debug"
(and that is indeed what dmd produces even on OS X).

On systems which prepend an underscore, we want the compiler to take care
of this so code is portable, otherwise code must do this:

version (OSX)
  pragma(mangle, "_debug") extern (C) void debug_c(const(char*));
else
  pragma(mangle, "debug") extern (C) void debug_c(const(char*));


I think if you specify the mangling most of the time it's because you 
don't want the compiler to do it for you. But you should consider doing 
this:


string mangleC(string name) {
    version (OSX) return "_" ~ name;
    else return name;
}

pragma(mangle, mangleC("debug")) extern (C) void debug_c(const(char*));
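The same idea, sketched in Python for illustration (the function name mangle_c and the use of sys.platform are my own choices, not from the thread): compute the platform-level C symbol name, prepending the underscore only on Darwin-style targets.

```python
# Hedged sketch: derive the object-file symbol name for a C identifier,
# prepending "_" on platforms (like OS X) that mangle C symbols that way.
import sys

def mangle_c(name, platform=sys.platform):
    return ("_" + name) if platform == "darwin" else name

assert mangle_c("debug", platform="darwin") == "_debug"
assert mangle_c("debug", platform="linux") == "debug"
```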

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Any D plugins for recent Xcode?

2015-04-19 Thread Michel Fortin via Digitalmars-d

On 2015-04-19 00:28:14 +, Michel Fortin michel.for...@michelf.ca said:


On 2015-04-18 20:18:56 +, Dan Olson zans.is.for.c...@yahoo.com said:


Yeah, I was hoping someone might have done the same for Xcode 6.  I've
never poked at how Xcode plugins work, maybe somehow it can be
upgraded.  I have gotten the impression that the Xcode plugin API
changes often and is undocumented, making it hard to maintain something
non-Apple.


It's an undocumented API, and they sometimes change it, although not 
that much. Xcode 4 broke the plugin and I didn't put much effort into 
figuring out what was wrong. Feel free to fork and fix it if you want; 
the code is on Github.

https://github.com/michelf/d-for-xcode/


I just realized I have some unpushed changes on my computer that 
aren't on Github. I probably never pushed them because it didn't work 
well enough, but it might be closer to something that works. If someone 
wants to sort things out, I pushed those to a separate branch, 
xcode-4-work.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Any D plugins for recent Xcode?

2015-04-18 Thread Michel Fortin via Digitalmars-d

On 2015-04-18 20:18:56 +, Dan Olson zans.is.for.c...@yahoo.com said:


Yeah, I was hoping someone might have done the same for Xcode 6.  I've
never poked at how Xcode plugins work, maybe somehow it can be
upgraded.  I have gotten the impression that the Xcode plugin API
changes often and is undocumented, making it hard to maintain something
non-Apple.


It's an undocumented API, and they sometimes change it, although not 
that much. Xcode 4 broke the plugin and I didn't put much effort into 
figuring out what was wrong. Feel free to fork and fix it if you want; 
the code is on Github.

https://github.com/michelf/d-for-xcode/


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: SDC needs you

2015-04-17 Thread Michel Fortin via Digitalmars-d

On 2015-04-17 02:19:49 +, Walter Bright newshou...@digitalmars.com said:


On 4/16/2015 10:47 AM, Michel Fortin wrote:

It would be sad to see all those efforts wasted.


Yes it would. The problem is I have a hard time reviewing complex 
things I don't understand, so I procrastinate. The fault is mine, not 
with your work.


I see. Well, you shouldn't blame yourself for being uncomfortable 
reviewing an invasion of alien concepts coming from another language. I 
guess it should have been expected. But now that we've nailed what 
stalls the review process, we can try to find a solution.


Here's an idea: instead of doing the review all by yourself, Jacob and 
you (or alternatively you and me) could do a Skype review with screen 
sharing where you scroll through the changes on Github and get every 
line and every new concept explained to you as you go. That should help 
make things understandable. And you can always do a second pass all by 
yourself to inspect the details later, if you wish.


Would that make you more comfortable?

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: SDC needs you

2015-04-16 Thread Michel Fortin via Digitalmars-d

On 2015-04-16 06:50:35 +, Jacob Carlborg d...@me.com said:

I've been working on the Objective-C support for quite a while. I'm on 
my third rewrite due to comments in previous pull requests. The latest 
pull request [1] was created in January, it's basically been stalled 
since February due to lack of review and Walter has not made a single 
comment at all in this pull request.


I did the rewrites to comply with the requests Walter made in previous 
pull requests. Although not present as a bugzilla issue with the 
preapproved tag, I did interpret it as preapproved based on a forum 
post made by you [2].


I know that focus has shifted to GC, reference counting, C++ and so on, 
but you're not making it easy for someone to contribute.


[1] https://github.com/D-Programming-Language/dmd/pull/4321
[2] http://forum.dlang.org/post/lfoe82$17c0$1...@digitalmars.com


Back at the time I was working on D/Objective-C, my separate work on a 
feature proposed in pull #3 (that const(Object)ref thing) got a similar 
treatment: no comment from Walter in months. It's time-consuming to 
maintain a complex pull request against a changing master branch, and 
it was abandoned at some point because I got tired of maintaining it 
with no review in sight.


Using Github was a new thing back then, so I didn't necessarily expect 
the review to go smoothly given #3 isn't a trivial change. But getting 
no comment at all made me rethink things. It made me dread a similar 
fate would await D/Objective-C. It was one of the reasons I stopped 
working on it. Now that Jacob has taken over the Herculean task of 
making it work with current DMD after a few years of falling behind, 
and of refactoring it as a series of pull requests by sub-feature to 
make it easier to review, I fear more and more it'll get the same 
treatment as #3: ignored by Walter for several months (that's where we 
are now) and then abandoned (when Jacob's patience and/or spare time 
runs out).


It would be sad to see all those efforts wasted.

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP77 - Fix unsafe RC pass by 'ref'

2015-04-10 Thread Michel Fortin via Digitalmars-d

On 2015-04-10 21:29:19 +, Walter Bright newshou...@digitalmars.com said:


On 4/10/2015 2:11 PM, Martin Nowak wrote:

On 04/09/2015 01:10 AM, Walter Bright wrote:

http://wiki.dlang.org/DIP77


In the first problem example:

  struct S {
  RCArray!T array;
  }
  void main() {
  auto s = S(RCArray!T([T()])); // s.array's refcount is now 1
  foo(s, s.array[0]);   // pass by ref
  }
  void foo(ref S s, ref T t) {
  s.array = RCArray!T([]);  // drop the old s.array
  t.doSomething();  // oops, t is gone
  }

What do you do to pin s.array?

auto tmp = s;

or

auto tmp = s.array;



The latter.


And how is it pinned in this case?

 struct S {
 private RCArray!T array;
 ref T opIndex(int index) return { return array[index]; }
 void clear() { array = RCArray!T([]); }
 }
 void main() {
 auto s = S(RCArray!T([T()])); // s.array's refcount is now 1
 foo(s, s[0]);   // pass by ref
 }
 void foo(ref S s, ref T t) {
 s.clear();// drop the old s.array
 t.doSomething();  // oops, t is gone
 }

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP77 - Fix unsafe RC pass by 'ref'

2015-04-10 Thread Michel Fortin via Digitalmars-d

On 2015-04-10 23:22:17 +, deadalnix deadal...@gmail.com said:


On Friday, 10 April 2015 at 23:18:59 UTC, Martin Nowak wrote:

On 04/10/2015 11:29 PM, Walter Bright wrote:


The latter.


Can you update that part in the DIP, it wasn't clear that the temporary
selectively pins RCO fields of a normal struct passed by ref.


If a struct has RCO fields, shouldn't it be an RCO itself, and as such 
be pinned ?


Not necessarily. A @disabled postblit could make it no longer RCO 
(including a @disabled postblit in one of the fields).



It sounds like this is implied in the DIP.


That's what I thought too. But when confronted with a case where that 
wouldn't work, Walter said in this thread that the compiler would make 
a temporary of the fields. So I'm not too sure what to think anymore. 
The DIP should clarify what happens with @disabled postblit and with 
RCO fields inside non-RCO structs.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP77 - Fix unsafe RC pass by 'ref'

2015-04-09 Thread Michel Fortin via Digitalmars-d

On 2015-04-08 23:10:37 +, Walter Bright newshou...@digitalmars.com said:


http://wiki.dlang.org/DIP77


In the definition of a Reference Counted Object:


An object is assumed to be reference counted if it has a postblit and a 
destructor, and does not have an opAssign marked @system.



Why should it not have an opAssign marked @system?

And what happens if the struct has a postblit but it is @disabled? Will 
the compiler forbid you from passing it by ref in cases where it'd need 
to make a copy, or will it just not be a RCO?


More generally, is it right to add implicit copying just because a 
struct has a postblit and a destructor? If someone implemented a 
by-value container in D (such as those found in C++), this behaviour of 
the compiler would trash the performance by silently doing useless 
unnecessary copies. You won't even get memory-safety as a benefit: if 
the container allocates from the GC it's safe anyway, otherwise you're 
referencing deallocated memory with your ref parameter (copying the 
struct would just make a copy elsewhere, not retain the memory of the 
original).


I think you're assuming too much from the presence of a postblit and a 
destructor. This implicit copy behaviour should not be trigged by 
seemingly unrelated clues. Instead of doing that:


auto tmp = rc;

the compiler should insert this:

auto tmp = rc.opPin();

RCArray can implement opPin by returning a copy of itself. A by-value 
container can implement opPin by returning a dummy struct that retains 
the container's memory until the dummy struct's destructor is called. 
Alternatively someone could make a dummy "void opPin() @system {}" to 
signal it isn't safe to pass internal references around (only in system 
code would the implicit call to opPin compile). If you were writing a 
layout-compatible D version of std::vector, you'd likely have to use a 
@system opPin because there's no way you can pin that memory and 
guarantee memory-safety when passing references around.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: RCArray is unsafe

2015-03-03 Thread Michel Fortin via Digitalmars-d

On 2015-03-02 05:57:12 +, Walter Bright said:


On 3/1/2015 12:51 PM, Michel Fortin wrote:
That's actually not enough. You'll have to block access to global 
variables too:


Hmm. That's not so easy to solve.


Let's see. The problem is that 'ref' variables get invalidated by some 
operations. Perhaps we could just tell the compiler that doing this or 
that will make 'ref' variables unsafe after that point. Let's try 
adding a @refbreaking function attribute, and apply it to 
RCArray.opAssign:


S s;

void main() {
s.array = RCArray!T([T()]);
foo(s.array[0]);
}
void foo(ref T t) {
    t.doSomething();         // all is fine
    s.array = RCArray!T([]); // here, RCArray.opAssign would be labeled @refbreaking
    t.doSomething();         // error: cannot use a 'ref' variable after a @refbreaking operation
}

Also, the above shouldn't compile anyway because @refbreaking would 
need to be transitive, and it follows that `foo` would need to be 
@refbreaking too:


void foo(ref T t) @refbreaking {
...
}

which in turn means that `main` too needs to be @refbreaking.

So what needs to be @refbreaking? Anything that might deallocate. This 
includes `opRelease` if it deallocates when the counter reaches zero. 
Although you could implement `opRelease` in a way that sends the memory 
block to an autorelease pool of some kind, in which case draining the 
autorelease pool at a later point would be @refbreaking.



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: RCArray is unsafe

2015-03-03 Thread Michel Fortin via Digitalmars-d

On 2015-03-03 22:39:12 +, Michel Fortin said:

Let's see. The problem is that 'ref' variables get invalidated by some 
operations. Perhaps we could just tell the compiler that doing this or 
that will make 'ref' variables unsafe after that point. Let's try 
adding a @refbreaking function attribute, and apply it to 
RCArray.opAssign:


S s;

void main() {
s.array = RCArray!T([T()]);
foo(s.array[0]);
}
void foo(ref T t) {
    t.doSomething();         // all is fine
    s.array = RCArray!T([]); // here, RCArray.opAssign would be labeled @refbreaking
    t.doSomething();         // error: cannot use a 'ref' variable after a @refbreaking operation
}

Also, the above shouldn't compile anyway because @refbreaking would 
need to be transitive, and it follows that `foo` would need to be 
@refbreaking too:


void foo(ref T t) @refbreaking {
...
}

which in turn means that `main` too needs to be @refbreaking.

So what needs to be @refbreaking? Anything that might deallocate. This 
includes `opRelease` if it deallocates when the counter reaches zero. 
Although you could implement `opRelease` in a way that sends the memory 
block to an autorelease pool of some kind, in which case draining the 
autorelease pool at a later point would be @refbreaking.


And giving it some more thought, @refbreaking also has the interesting 
property that any pair of opAddRef/opRelease with no @refbreaking call 
between them can be elided safely.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Making RCSlice and DIP74 work with const and immutable

2015-03-01 Thread Michel Fortin via Digitalmars-d

On 2015-03-01 01:40:42 +, Andrei Alexandrescu said:

Tracing garbage collection can afford the luxury of e.g. mutating data 
that was immutable during its lifetime.


Reference counting needs to make minute mutations to data while 
references to that data are created. In fact, it's not mutation of the 
useful data, the payload of a data structure; it's mutation of 
metadata, additional information about the data (i.e. a reference count 
integral).


The RCOs described in DIP74 and also RCSlice discussed in this forum 
need to work properly with const and immutable. Therefore, they need a 
way to reliably define and access metadata for a data structure.


One possible solution is to add a @mutable or @metadata attribute 
similar to C++'s keyword mutable. Walter and I both dislike that 
solution because it's hamfisted and leaves too much opportunity for 
abuse - people can essentially create unbounded amounts of mutable 
payload for an object claimed to be immutable. That makes it impossible 
(or unsafe) to optimize code based on algebraic assumptions.


We have a few candidates for solutions, but wanted to open with a good 
discussion first. So, how do you envision a way to define and access 
mutable metadata for objects (including immutable ones)?


Store the metadata in a global hash table.

There's a problem with reference counting immutable objects: they are 
implicitly shared. Any metadata attached to them thus needs to be 
shared. Accessing the metadata through a global shared hash table isn't 
going to be that much of a performance hit compared to whatever 
mechanism is used to synchronize access to that data.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: RCArray is unsafe

2015-03-01 Thread Michel Fortin via Digitalmars-d

On 2015-03-01 19:21:57 +0000, Walter Bright said:

The trouble seems to happen when there are two references to the same 
object passed to a function. I.e. there can be only one borrowed ref 
at a time.


I'm thinking this could be statically disallowed in @safe code.


That's actually not enough. You'll have to block access to global 
variables too:


S s;

void main() {
    s.array = RCArray!T([T()]);   // s.array's refcount is now 1
    foo(s.array[0]);   // pass by ref
}

void foo(ref T t) {
    s.array = RCArray!T([]);  // drop the old s.array
    t.doSomething();  // oops, t is gone
}


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Improving DIP74: functions borrow by default, retain only if needed

2015-02-27 Thread Michel Fortin via Digitalmars-d

On 2015-02-27 23:11:55 +0000, deadalnix said:


On Friday, 27 February 2015 at 23:06:26 UTC, Andrei Alexandrescu wrote:
OK, so at least in theory autorelease pools are not necessary for 
getting ARC to work? -- Andrei


ARC need them, this is part of the spec. You can have good RC without them IMO.


Apple's ARC needs autorelease pools to interact with Objective-C code. 
But if by ARC you just mean what the acronym stands for -- automatic 
reference counting -- there's no need for autorelease pools to 
implement ARC.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Improving DIP74: functions borrow by default, retain only if needed

2015-02-27 Thread Michel Fortin via Digitalmars-d

On 2015-02-27 20:34:08 +0000, Steven Schveighoffer said:


On 2/27/15 3:30 PM, Steven Schveighoffer wrote:


void main()
{
    C c = new C; // ref counted class
    C2 c2 = new C2; // another ref counted class
    c2.c = c;
    foo(c, c2);
}


Bleh, that was dumb.

void main()
{
    C2 c2 = new C2;
    c2.c = new C;
    foo(c2.c, c2);
}

Still same question. The issue here is how do you know that the 
reference that you are sure is keeping the thing alive is not going to 
release it through some back door.


You have to retain 'c' for the duration of the call unless you can 
prove somehow that calling the function will not cause it to be 
released. You can prove it in certain situations:


- you are passing a local variable as a parameter and nobody has taken 
a mutable reference (or pointer) to that variable, or to the stack 
frame (be wary of nested functions accessing the stack frame)
- you are passing a global variable as a parameter to a pure function 
and aren't giving to that pure function a mutable reference to that 
variable.
- you are passing a member variable as a parameter to a pure function 
and aren't giving to that pure function a mutable reference to that 
variable or its class.


There are surely other cases, but you get the idea. These three 
situations are probably the most common, especially the first one. For 
instance, inside a member function, 'this' is a local variable and you 
will never pass it to another function by ref, so it's safe to call 
'this.otherFunction()' without retaining 'this' first.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Improving DIP74: functions borrow by default, retain only if needed

2015-02-27 Thread Michel Fortin via Digitalmars-d

On 2015-02-27 21:33:51 +0000, Steven Schveighoffer said:

I believe autorelease pools are not needed for ARC, but are maintained 
because much Objective-C code contains MRC, and that protocol needs to 
be supported.


Exactly.

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: A Refcounted Array Type

2015-02-26 Thread Michel Fortin via Digitalmars-d

On 2015-02-26 21:07:26 +0000, Andrei Alexandrescu said:


On 2/26/15 12:54 PM, deadalnix wrote:

On Thursday, 26 February 2015 at 16:57:51 UTC, Andrei Alexandrescu wrote:

On 2/26/15 8:51 AM, Steven Schveighoffer wrote:

As talked about before, running dtors in the originating thread can
solve this problem.


Yah, that will solve the nonatomic reference counting. What do you
think about http://forum.dlang.org/thread/mcllre$1abs$1...@digitalmars.com?

Andrei


class BazingaException : Exception {

    RefCount!Stuff reallyImportantStuff;

    // ...

}

void main() {
    auto t = new Thread({
        RefCount!Stuff s = ...;
        throw new BazingaException(s);
    });

    t.start();
    t.join();
}


Could you please walk me through what the matter is here. Thanks. -- Andrei


The exception is thrown by t.join() in another thread, after the 
originating thread died. Thus, obviously, it cannot be destructed in 
the originating thread as stated above. But everyone already knows that.


But the example doesn't make one problem quite clear: unless the GC 
destroys the exception in the thread where join() was called, there can 
be a race. That's because join() moves the exception to another thread, 
and the thread that now owns the exception could make copies of that 
reallyImportantStuff and access the counter beyond the exception's 
lifetime. So it turns out that the GC needs to call the exception's 
destructor in the thread calling join() to avoid races.


Additionally, if the old thread has leftover objects still not yet 
collected, they'll need to be destroyed in the thread calling join() 
too. Otherwise you might get races when the exception is destroyed.


So you could solve all that by changing ownership of things 
originating from the worker thread to the thread that is calling 
join(). Or if no one calls join(), then you can destroy objects 
originating from the dead thread in any thread, as long as they are all 
destroyed *in the same thread* (because objects originating from the 
same thread might all point to the same thread-local reference counts).


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: A Refcounted Array Type

2015-02-24 Thread Michel Fortin via Digitalmars-d

On 2015-02-23 22:15:46 +0000, Walter Bright said:


int* count;

[...] if (count && --*count == 0) [...]


Careful!

This isn't memory safe and you have to thank the GC for it. If you ever 
use RCArray as a member variable in a class, the RCArray destructor is 
going to be called from a random thread when the class destructor is 
run. If some thread has a stack reference to the array you have a race.


You have to use an atomic counter unless you can prove the RCArray 
struct will never be put in a GC-managed context. It is rather sad that 
the language has no way to enforce such a restriction, and also that 
@safe cannot detect that this is a problem here.
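The discipline being asked for can be sketched as a small Python simulation (a lock stands in for the atomic instruction, and the names are my own; real code would use an actual atomic decrement): the point is that the decrement-and-test must be indivisible, because the release can run from whichever thread happens to run the class destructor.

```python
import threading

class SharedCount:
    # Stand-in for an atomic reference count. The decrement must be
    # indivisible because, once the struct lives inside a GC-managed
    # object, the destructor may run it from any thread.
    def __init__(self, n=1):
        self._n = n
        self._lock = threading.Lock()  # models the atomic instruction

    def increment(self):
        with self._lock:
            self._n += 1
            return self._n

    def decrement(self):
        # Returns True when this was the last reference (time to free).
        with self._lock:
            self._n -= 1
            return self._n == 0
```

With a plain non-atomic `--*count`, two threads can both read the old value and neither observe zero, leaking or double-freeing the block.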


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: D/Objective-C 64bit

2014-11-20 Thread Michel Fortin via Digitalmars-d-announce

On 2014-11-18 09:07:10 +0000, Christian Schneider said:


This is what I came up with so far:

override KeyboardView initWithFrame(NSRect frame) [initWithFrame:] {
 //my stuff
 return cast(KeyboardView) super.initWithFrame(frame) ;
}


Why not use a constructor and let the compiler manage the boilerplate?

this(NSRect frame) [initWithFrame:] {
//my stuff
super(frame);
}

This should emit the same code as the function above (but I haven't 
tested). And then you can write:


auto view = new KeyboardView(someFrame);

and have proper type safety.

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: D/Objective-C 64bit

2014-11-04 Thread Michel Fortin via Digitalmars-d-announce
On 2014-11-04 09:07:08 +0000, Christian Schneider 
<schnei...@gerzonic.net> said:



Ok, some more info:

I changed the mapping in tableview.d to:

void setDoubleAction(void __selector(ObjcObject)) [setDoubleAction:] ;


That's indeed the best way to do it.



This should be the way to do it. Now in the implementation of the
action:

  void doubleClickAction(ObjcObject sender) {
      NSLog("double click action") ;
      NSLog("the sender: %@", sender) ;
  }

This works fine and prints the log: 2014-11-04 10:01:57.967
tableview[9988:507] the sender: <NSTableView: 0x7f8309f156b0>

But now I would like to do something with this sender, like I do
often in an Objective-C project:

NSTableView tableView = cast(NSTableView)sender ;

I get a  EXC_BAD_ACCESS (SIGSEGV) on this line. Only when I
replace both ObjcObject with NSObject (in tableview.d, the
mapping, and the target action) this cast works. I might be
missing something obvious here.


There is no test for interface-to-class casts in the D/Objective-C test 
suite, which means you're likely the first person to try that. It's 
probably just an oversight in the compiler code.



--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: D/Objective-C 64bit

2014-11-04 Thread Michel Fortin via Digitalmars-d-announce

On 2014-11-01 10:47:54 +0000, Jacob Carlborg <d...@me.com> said:


On 2014-11-01 01:58, Michel Fortin wrote:


That said, there are other parts of D/Objective-C that could pose
difficulties to existing languages tools, some syntactic (__selector, or
this.class to get the metaclass)


this.class could perhaps be called this.classof, at least that's 
valid syntax. Although I don't know what to do about __selector.


Once this is merged in DMD, __selector could be documented to be 
syntactically identical to a delegate (although semantically different) 
and it could be made syntactically valid for regular D code compiled 
with no Objective-C support (although it'd remain semantically 
invalid). That'd allow you to hide it in version blocks and static ifs.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: D/Objective-C 64bit

2014-10-31 Thread Michel Fortin via Digitalmars-d-announce

On 2014-10-30 07:13:08 +0000, Jacob Carlborg <d...@me.com> said:

I had a look at your fix. I see that you added a call to release in 
the destructor. Just for the record, there's no guarantee that the 
destructor of a GC allocated object gets run, at all.


D/Objective-C never allocates Objective-C objects on the GC heap. If an 
object derives from NSObject it is allocated using the usual 
Objective-C alloc class method when you use the new keyword.



Or, if this class get instantiated by some Objective-C framework then 
it will know nothing about the destructor in D. I guess the right 
solution is to override dealloc.


Whoever instantiated the object has no bearing on who will deallocate 
and call its destructor. When you call release() and the counter falls 
to zero, dealloc is called and the memory is then released. Whether 
that call to release() was made from D or Objective-C code is 
irrelevant.



Hmm, I'm wondering if it's a good idea to lower a destructor to 
dealloc, just like a constructor is lowered to init.


I can't remember if this is an oversight or just something that I 
hadn't got to yet. In my mind this was already done.


Anyway, the answer is *yes*: the destructor should be mapped to the 
dealloc selector. It should also implicitly call the destructor for 
any struct within the object and call its superclass's destructor (if 
present), the usual D semantics for a destructor (that part might 
already work actually).


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: D/Objective-C 64bit

2014-10-31 Thread Michel Fortin via Digitalmars-d-announce

On 2014-10-30 09:16:34 +0000, Martin Nowak <c...@dawg.eu> said:


On Tuesday, 11 March 2014 at 18:23:08 UTC, Jacob Carlborg wrote:
A DIP is available here [1] and the latest implementation is available 
here [2].


[1] http://wiki.dlang.org/DIP43


Instead of adding the selector syntax you could reuse pragma mangle.


Nooo! Mangling is something different from the selector name. Mangling 
is the linker symbol for the function. The selector is the name used to 
find the function pointer for a given dynamic object (with similar 
purpose to a vtable offset). The function has both a mangled name and a 
selector name, and they're always two different names.




Alternatively a compiler recognized UDA would work too.

 @objcSel!("insertItemWithObjectValue", "atIndex")
 void insertItem(ObjcObject object, NSInteger value);


Ah, that's better. Except you really should use a single string with 
colons, otherwise you'll have a problem distinguishing no-parameter 
selectors from single-parameter selectors.
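The ambiguity comes from arity being encoded in the colons: an Objective-C selector takes exactly as many parameters as it has colons, so a single colon-joined string is self-describing while a list of name parts is not. A tiny illustration (Python, hypothetical helper):

```python
def selector_arity(selector):
    # An Objective-C selector's parameter count equals its colon count:
    # "description" takes none, "insertItemWithObjectValue:atIndex:" takes two.
    return selector.count(":")
```

With separate strings, `@objcSel!("foo")` cannot distinguish the no-parameter selector `foo` from the one-parameter selector `foo:`; the string `"foo:"` can.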



Changing the lexer and parser would affect all D language tools 
(editors, formatters, linters, other compilers). So now that we do have 
UDAs I don't see a justification for changing the syntax and grammar of 
D.


Very true. I agree. Now that UDAs exist, it'd be preferable to use them.

That said, there are other parts of D/Objective-C that could pose 
difficulties to existing languages tools, some syntactic (__selector, 
or this.class to get the metaclass), some semantic (overridable 
static methods having a this pointing to the metaclass). I had to 
bend the rules in some places to make all that work. But it's true that 
nothing will be more confusing to those tools than the selector 
declaration currently following a function name.



--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: D mention on developer.apple.com

2014-09-27 Thread Michel Fortin via Digitalmars-d

On 2014-09-27 10:02:59 +0000, Jacob Carlborg <d...@me.com> said:


On 2014-09-27 11:05, ponce wrote:


- and no exceptions, just because


The Objective-C frameworks by Apple basically never throw exceptions.


There's that.

Also, remember Walter's fear that ARC in D would bloat the generated 
code by implicitly adding exception handlers all over the place? Well, 
Swift won't have to bother about that. I'm not claiming that is why 
there are no exceptions in Swift, but it's an interesting conjecture.


The Cocoa error handling pattern is rather ugly to use in Swift at the 
moment. I hypothesize they'll come up with something later on the error 
handling front.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



D mention on developer.apple.com

2014-09-25 Thread Michel Fortin via Digitalmars-d
Maybe this will be of interest to someone. D was mentioned on the 
official Swift Blog today:


Swift borrows a clever feature from the D language: these identifiers 
expand to the location of the caller when evaluated in a default 
argument list.


-- Building assert() in Swift, Part 2: __FILE__ and __LINE__
https://developer.apple.com/swift/blog/?id=15

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: More radical ideas about gc and reference counting

2014-05-11 Thread Michel Fortin via Digitalmars-d

On 2014-05-11 08:29:13 +0000, Walter Bright <newshou...@digitalmars.com> said:

Again, O-C and C++/CX ARC are not memory safe because in order to make 
it perform they provide unsafe escapes from it.


But D could provide memory-safe escapes. If we keep the current GC to 
collect cycles, we could also allow raw pointers managed by the GC 
alongside ARC.


Let's say we have two kinds of pointers: rc+gc pointers (the default) 
and gc_only pointers (on demand). When assigning from a rc+gc pointer 
to a gc_only pointer, the compiler emits code that disables destruction 
via the reference counting. This makes the GC solely responsible for 
destructing and deallocating that memory block. You can still assign 
the pointer to a rc+gc pointer later on, but the reference count is no 
longer reliable which is why RC-based destruction has been disabled.
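Here is how that hand-off could look, simulated in Python (all names, like `take_gc_only_pointer`, are made up for illustration and not part of any proposal): taking a gc_only pointer flips a flag that permanently disables RC-driven destruction for that block.

```python
class HybridBlock:
    # A heap block under the hybrid scheme: reference-counted by default,
    # with RC destruction disabled once a gc_only pointer escapes.
    def __init__(self):
        self.refcount = 1
        self.rc_destruction = True   # RC may destruct this block
        self.destroyed = False

def take_gc_only_pointer(block):
    # Assigning an rc+gc pointer to a gc_only pointer: the count is no
    # longer reliable, so only the GC may destruct/deallocate from now on.
    block.rc_destruction = False
    return block

def retain(block):
    block.refcount += 1

def release(block):
    block.refcount -= 1
    if block.refcount == 0 and block.rc_destruction:
        block.destroyed = True       # deterministic destruction via RC
```

Once the flag is cleared the counter can even go stale; correctness no longer depends on it, only liveness tracing does.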


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: More radical ideas about gc and reference counting

2014-05-11 Thread Michel Fortin via Digitalmars-d

On 2014-05-11 19:37:30 +0000, Walter Bright <newshou...@digitalmars.com> said:


On 5/11/2014 5:52 AM, Michel Fortin wrote:

On 2014-05-11 08:29:13 +0000, Walter Bright <newshou...@digitalmars.com> said:


Again, O-C and C++/CX ARC are not memory safe because in order to make it
perform they provide unsafe escapes from it.


But D could provide memory-safe escapes. If we keep the current GC to collect
cycles, we could also allow raw pointers managed by the GC alongside ARC.

Let's say we have two kinds of pointers: rc+gc pointers (the default) and
gc_only pointers (on demand). When assigning from a rc+gc pointer to a gc_only
pointer, the compiler emits code that disables destruction via the reference
counting. This makes the GC solely responsible for destructing and deallocating
that memory block. You can still assign the pointer to a rc+gc pointer 
later on, but the reference count is no longer reliable which is why 
RC-based destruction has been disabled.


Yes, you can make it memory safe by introducing another pointer type, 
as Rust does. But see my comments about this scheme in the message you 
replied to.


(Yes, I understand your proposal is a variation on that scheme.)


It is a variation on that scheme, but with one significant difference: 
those two pointer types can mix and there's no restriction on 
assignments from one type to the other. There's therefore no transitive 
effect and no complicated ownership rule to understand.


This obviously does not address all your concerns with ARC (which I'll 
admit most are valid), but this "ARC isn't memory-safe" argument has to 
stop. It does not make sense. One doesn't need to sacrifice memory 
safety to use ARC, neither is that sacrifice necessary for having 
islands of non-ARC code. That's what I was trying to point out in my 
previous post.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: More radical ideas about gc and reference counting

2014-05-11 Thread Michel Fortin via Digitalmars-d

On 2014-05-11 21:41:10 +0000, Walter Bright <newshou...@digitalmars.com> said:

Your proposal still relies on a GC to provide the memory safety, and 
has no inherent protection against GC pauses. Your idea has a lot of 
merit, but it is a hybrid ARC/GC system.


If you tread carefully you can disable GC collections at runtime and 
not suffer GC pauses. If you have no cycles and no gc_only pointers, 
then you won't run out of memory.


I think if this thread has proven something, it's that people need to 
be able to choose their memory management policy when the default is 
unsatisfactory. I'm trying to find a way to do that, a way to disable 
one side or the other if it is poisonous to your particular 
application. It is a hybrid system I'm suggesting, no doubt. It'd also 
be an interesting experiment, if someone wants to take it.



As long as C++/CX and O-C are brought out here as proven, successful 
examples for D to emulate here, and there is no acknowledgement that 
they are not remotely memory safe, I need to continue to point this out.


You should not say that ARC is not safe then, you should say instead 
that ARC in those languages has to be supplemented with unsafe code to 
be fast enough. That statement I can agree with. Taking the shortcut 
saying simply "ARC is unsafe" is misleading.



--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: From slices to perfect imitators: opByValue

2014-05-08 Thread Michel Fortin via Digitalmars-d
On 2014-05-08 03:58:21 +0000, Andrei Alexandrescu 
<seewebsiteforem...@erdani.org> said:


So there's this recent discussion about making T[] be refcounted if and 
only if T has a destructor.


That's an interesting idea. More generally, there's the notion that 
making user-defined types as powerful as built-in types is a Good 
Thing(tm).


...

This magic of T[] is something that custom ranges can't avail 
themselves of. In order to bring about parity, we'd need to introduce 
opByValue which (if present) would be automatically called whenever the 
object is passed by value into a function.


Will this solve the problem that const(MyRange!(const T)) is a 
different type from const(MyRange!(T))? I doubt it. But they should be 
the same type if we want to follow the semantics of the language's 
slices, where const(const(T)[]) is the same as const(T[]).


Perhaps this is an orthogonal issue, but I wonder whether a solution to 
the above problem could make opByValue unnecessary.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: More radical ideas about gc and reference counting

2014-05-06 Thread Michel Fortin via Digitalmars-d
On 2014-05-06 12:04:55 +0000, Manu via Digitalmars-d 
<digitalmars-d@puremagic.com> said:



Notably, I didn't say 'phones'. Although I think they do generally
fall into this category, I think they're drifting away. Since they run
full OS stack's, it's typical to have unknown amounts of free memory
for user-space apps and virtual memory managers that can page swap, so
having excess memory overhead probably isn't such a big deal. It's
still a major performance hazard though. Stuttering realtime
applications is never a professional look, and Android suffers
chronically in this department compared to iOS.


Note that iOS has no page swap. Apps just get killed if there isn't 
enough memory (after being sent a few low memory signals they can react 
on, clearing caches, etc.). Apps that take a lot of memory cause other 
apps in the background to be killed (and later restarted when they come 
to the foreground).


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: More radical ideas about gc and reference counting

2014-05-04 Thread Michel Fortin via Digitalmars-d

On 2014-05-04 09:00:45 +0000, Marc Schütz <schue...@gmx.net> said:


On Saturday, 3 May 2014 at 11:12:56 UTC, Michel Fortin wrote:
Or turn the rule on its head: make it so having a destructor makes the 
heap memory block reference counted. With this, adding a destructor 
always causes deterministic destruction.


The compiler knows statically whether a struct has a destructor. For a 
class you need a runtime trick because the root object can be either. 
Use a virtual call or a magic value in the reference count field to 
handle the reference count management. You also need a way to tag a 
class as guaranteed to have no derived class with a destructor (to 
provide a static proof for the compiler that it can omit ARC code), 
perhaps @disable ~this().


Then remains the problem of cycles. It could be a hard error if the 
destructor is @safe (error thrown when the GC collects it). The 
destructor could be allowed to run (in any thread) if the destructor is 
@system or @trusted.


The interesting thing with this is that the current D semantics are 
preserved, destructors become deterministic (except in the presence of 
cycles, which the GC will detect for you), and if you're manipulating 
pointers to pure memory (memory blocks having no destructor) there's no 
ARC overhead. And finally, no new pointer attributes; Walter will like 
this last one.


This is certainly also an interesting idea, but I suspect it is bound 
to fail, simply because it involves ARC. Reference counting always 
makes things so much more complicated... See for example the cycles 
problem you mentioned: If you need a GC for that, you cannot guarantee 
that the objects will be collected, which was the reason to introduce 
ARC in the first place. Then there are the problems with shared vs. 
thread-local RC (including casting between the two), and arrays/slices 
of RC objects. And, of course, Walter doesn't like it ;-)


Arrays/slices of RC objects would have to be reference counted too. 
That was part of Andrei's proposal.


Cycles break deterministic destruction and break the guarantee that the 
destructor will be called because we're relying on the GC to free them, 
true. But today we have no guarantee at all, even when there's no cycle. 
It is possible to avoid cycles, and not that hard to do in a safe way 
when you have weak (auto-nulling) pointers (which D should have 
regardless of the memory management scheme). It's even easier if you 
use the GC to detect them. I'm proposing that GC calls to a @safe 
destructor be a hard error so you can use that error to fix your 
cycles. (Note that cycles with no destructor are perfectly fine though.)


As for shared vs. thread-local RC, Andrei proposed a solution for that earlier.
http://forum.dlang.org/post/ljrr8l$2dt1$1...@digitalmars.com

I know Walter doesn't like to have pointers variables do magic stuff. 
This is why I'm confining ARC behaviour to objects, structs, and arrays 
having destructors. Andrei's initial proposal was to completely remove 
destructors for objects; all the pointers that would have survived that 
proposal are still RC-free in mine except for non-final classes when 
you haven't explicitly disabled the destructor. (An alternative to 
disabling the destructor would be to use a different root object for 
RC, I just find that less desirable.)


Frankly, if we want destructors to work safely when on the heap, I 
don't see another way.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: More radical ideas about gc and reference counting

2014-05-03 Thread Michel Fortin via Digitalmars-d

On 2014-05-01 17:35:36 +0000, Marc Schütz <schue...@gmx.net> said:

Maybe the language should have some way to distinguish between 
GC-managed and manually-managed objects, preferably in the type system. 
Then it could be statically checked whether an object is supposed to be 
GC-managed, and consequentially shouldn't have a destructor.


Or turn the rule on its head: make it so having a destructor makes the 
heap memory block reference counted. With this, adding a destructor 
always causes deterministic destruction.


The compiler knows statically whether a struct has a destructor. For a 
class you need a runtime trick because the root object can be either. 
Use a virtual call or a magic value in the reference count field to 
handle the reference count management. You also need a way to tag a 
class as guaranteed to have no derived class with a destructor (to 
provide a static proof for the compiler that it can omit ARC code), 
perhaps @disable ~this().


Then remains the problem of cycles. It could be a hard error if the 
destructor is @safe (error thrown when the GC collects it). The 
destructor could be allowed to run (in any thread) if the destructor is 
@system or @trusted.


The interesting thing with this is that the current D semantics are 
preserved, destructors become deterministic (except in the presence of 
cycles, which the GC will detect for you), and if you're manipulating 
pointers to pure memory (memory blocks having no destructor) there's no 
ARC overhead. And finally, no new pointer attributes; Walter will like 
this last one.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: GC vs Resource management.

2014-05-03 Thread Michel Fortin via Digitalmars-d
On 2014-05-03 18:27:47 +0000, Andrei Alexandrescu 
<seewebsiteforem...@erdani.org> said:


Interesting, we haven't explored that. The most problematic implication 
would be that classes with destructors will form a hierarchy separate 
from Object.


Seems like people have been ignoring my two posts in the thread 
"radical ideas about gc and reference counting". I've been proposing 
exactly that, and there's a way if you don't want a separate class 
hierarchy.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: More radical ideas about gc and reference counting

2014-04-30 Thread Michel Fortin via Digitalmars-d
On 2014-04-30 21:51:18 +0000, Andrei Alexandrescu 
<seewebsiteforem...@erdani.org> said:


I'm thinking e.g. non-interlocked refcounts go like 1, 3, 5, ... and 
interlocked refcounts go like 2, 4, 6, ...


Then you do an unprotected read of the refcount. If it's odd, then it's 
impossible for it to have originated as an interlocked one. So proceed 
with simple increment. If it's even, do an interlocked increment.


Nice idea, although I'd add a twist to support polymorphism in class 
hierarchies: add a magic value (say zero) to mean not 
reference-counted. When you instantiate an object that has a 
destructor, the reference counter is set to 1 or 2 depending on whether 
it's shared or not. If however the object has no destructor and is not 
reference counted, set the counter to the magic value.


Then have the compiler assume all objects are reference counted. At 
runtime only those objects with a non-magic value as the reference 
count will actually increment/decrement the counter.
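Putting the two tricks together (parity to tell interlocked from non-interlocked counts, a magic zero for "not reference-counted"), the retain path might look like this Python simulation (all names are illustrative; a lock stands in for the interlocked instruction):

```python
import threading

NOT_REFCOUNTED = 0   # magic value: block has no destructor, skip counting

class Ref:
    def __init__(self, shared=False, has_dtor=True):
        if not has_dtor:
            self.count = NOT_REFCOUNTED
        else:
            self.count = 2 if shared else 1   # even = interlocked, odd = local
        self._lock = threading.Lock()         # models the interlocked add

def retain(o):
    c = o.count                  # unprotected read is fine: parity and
    if c == NOT_REFCOUNTED:      # the magic value never change
        return
    if c % 2 == 1:
        o.count += 2             # odd: provably thread-local, plain add
    else:
        with o._lock:
            o.count += 2         # even: shared, interlocked add
```

Incrementing by 2 preserves the parity, so the odd/even tag survives any number of retains and releases.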


Finally, let the compiler understand some situations where objects are 
guaranteed to have no destructor so it can omit automatic reference 
counting code in those cases. One such situation is when you have a 
reference to a final class with no destructor. We could also add a 
@nodestructor attribute to forbid a class and all its descendants from 
having destructors.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP61: Add namespaces to D

2014-04-26 Thread Michel Fortin via Digitalmars-d

On 2014-04-26 09:31:51 +0000, Walter Bright <newshou...@digitalmars.com> said:


http://wiki.dlang.org/DIP61

Best practices in C++ code increasingly means putting functions and 
declarations in namespaces. Currently, there is no support in D to call 
C++ functions in namespaces. The primary issue is that the name 
mangling doesn't match. Need a simple and straightforward method of 
indicating namespaces.

...
As more and more people are attempting to call C++ libraries from D, 
this is getting to be a more and more important issue.


My opinion is that one shouldn't use namespaces in D.

But I do like this namespace concept. It sends the following message: 
you're welcome to use a namespace if you like -- it'll work -- but 99% 
of the time it'll only be some decoration in your source code that 
users of your API can ignore at will because everything is still 
available at module scope. (The 1% is when there is a name clash.) I 
think it's a practical thing to do to avoid fake namespace substitutes 
in D (look at std.datetime.Clock), even if it's a little dirty.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP61: Add namespaces to D

2014-04-26 Thread Michel Fortin via Digitalmars-d

On 2014-04-26 19:13:52 +0000, Walter Bright <newshou...@digitalmars.com> said:

I think that trying to be compatible with C++ templates is utter 
madness. But we can handle namespaces.


I'd argue that templates aren't the difficult part. Having struct/class 
semantics ABI-compatible with C++ is the hard part (constructors, 
destructors, exceptions). Once you have that, the difference between 
vector_of_int and vector<int> just becomes mangling. Same thing for 
template functions.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-23 Thread Michel Fortin via Digitalmars-d
On 2014-04-23 09:50:57 +0000, Ola Fosheim Grøstad 
<ola.fosheim.grostad+dl...@gmail.com> said:



On Tuesday, 22 April 2014 at 19:42:20 UTC, Michel Fortin wrote:
Objective-C isn't memory safe because it lets you play with raw 
pointers too. If you limit yourself to ARC-managed pointers (and avoid 
undefined behaviours inherited from C) everything is perfectly memory 
safe.


I'm not convinced that it is safe in multi-threaded mode. How does ARC 
deal with parallel reads and writes from two different threads? IIRC 
the most common implementations deal with read/read and write/write, 
but read/write is too costly?


The answer is that in the general case you should protect reads and 
writes to an ARC pointer with locks. Otherwise the counter risks 
getting out of sync and you'll get corruption somewhere later.


There are atomic properties which are safe to read and write from 
multiple threads. Internally they use the @synchronized keyword on the 
object.


But since there's no 'shared' attribute in Objective-C, you can't go 
very far if you wanted the compiler to check things for memory safety. 
That said, if you assume a correct implementation of the NSCopying 
protocol (deep copying), objects following that protocol would be safe 
to pass through a std.concurrency-like interface.


In all honesty, I'm not that impressed with the multithreading 
protections in D either. It seems you so often have to bypass the type 
system to make something useful that it doesn't appear very different 
from not having them. And don't get me started with synchronized 
classes...


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-23 Thread Michel Fortin via Digitalmars-d

On 2014-04-23 04:33:00 +, Walter Bright newshou...@digitalmars.com said:


On 4/22/2014 12:42 PM, Michel Fortin wrote:

On 2014-04-22 19:02:05 +, Walter Bright newshou...@digitalmars.com said:


Memory safety is not a strawman. It's a critical feature for a modern
language, and will become ever more important.


What you don't seem to get is that ARC, by itself, is memory-safe.


I repeatedly said that it is not memory safe because you must employ 
escapes from it to get performance.


It wasn't that clear to me you were saying that, but now it makes sense.

In Objective-C, the performance-sensitive parts are going to be 
implemented in C, that's true. But that's rarely going to be more than 
5% of your code, and probably only a few isolated parts where you're 
using preallocated memory blocks retained by ARC while you're playing 
with the content.


If you're writing something that can't tolerate a GC pause, then it 
makes perfect sense to make this performance-critical code unsafe so 
you can write the remaining 95% of your app in a memory-safe 
environment with no GC pause.


D on the other hand forces you to have those GC pauses or have no 
memory management at all. It's a different tradeoff and it isn't 
suitable everywhere, but I acknowledge it makes it easier to make 
performance-sensitive code @safe, something that'd be a shame to lose.



Objective-C isn't memory safe because it lets you play with raw 
pointers too. If you limit yourself to ARC-managed pointers (and avoid 
undefined behaviours inherited from C) everything is perfectly memory 
safe.


Allow me to make it clear that IF you never convert an ARC reference to 
a raw pointer in userland, I agree that it is memory safe. But this is 
not practical for high performance code.


Framing the problem this way makes it easier to find a solution.

I wonder, would it be acceptable if ARC was used everywhere by default 
but could easily be disabled inside performance-sensitive code, 
allowing the user to choose between safe GC-based memory management 
and unsafe manual memory management? I have an idea that'd permit just 
that. Perhaps I should write a DIP about it.




I'm pretty confident that had I continued my work on D/Objective-C we'd now be
able to interact with Objective-C objects using ARC in @safe code. I was
planning for that. Objective-C actually isn't very far from memory safety now
that it has ARC, it just lacks the @safe attribute to enable compiler 
verification.


I wish you would continue that work!


I wish I had the time too.

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-22 Thread Michel Fortin via Digitalmars-d
On 2014-04-22 17:52:27 +, Steven Schveighoffer 
schvei...@yahoo.com said:



Even doing it the way they have seems unnecessarily complex, given that
iOS 64-bit was brand new.


Perhaps it's faster that way due to some caching effect. Or perhaps 
it's to be able to have static constant string objects in the readonly 
segments.


Apple could always change their mind and add another field for the 
reference count. The Modern runtime has non-fragile classes, so you can 
change the base class layout without breaking ABI compatibility.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-22 Thread Michel Fortin via Digitalmars-d

On 2014-04-22 19:02:05 +, Walter Bright newshou...@digitalmars.com said:

Memory safety is not a strawman. It's a critical feature for a modern 
language, and will become ever more important.


What you don't seem to get is that ARC, by itself, is memory-safe.

Objective-C isn't memory safe because it lets you play with raw 
pointers too. If you limit yourself to ARC-managed pointers (and avoid 
undefined behaviours inherited from C) everything is perfectly memory 
safe.


I'm pretty confident that had I continued my work on D/Objective-C we'd 
now be able to interact with Objective-C objects using ARC in @safe 
code. I was planning for that. Objective-C actually isn't very far from 
memory safety now that it has ARC, it just lacks the @safe attribute to 
enable compiler verification.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-19 Thread Michel Fortin via Digitalmars-d

On 2014-04-18 23:48:43 +, Walter Bright newshou...@digitalmars.com said:


On 4/18/2014 3:02 PM, Michel Fortin wrote:

Objective-C enables ARC by default for all pointers to Objective-C objects.
Since virtually all Objective-C APIs deal with Objective-C objects (or integral
values), if you limit yourself to Objective-C APIs you're pretty much 
memory-safe.


"pretty much" isn't really what we're trying to achieve with @safe.


A lot of D code is memory safe too, but not all. Is D memory-safe? Yes, 
if you limit yourself to the @safe subset (and avoid the few holes 
remaining in it). Same thing for Objective-C: there exists a subset of 
the language that is memory safe, and pretty much everyone limits 
themselves to that subset already, unless there's a reason to go 
lower-level and use C.


In other words, unmanaged pointers are the assembler of Objective-C. 
It's unsafe and error prone, but it lets you optimize things when the 
need arise.




The point being, D could have managed and unmanaged pointers (like 
Objective-C with ARC has), make managed pointers the default, and let 
people escape pointer management if they want to inside 
@system/@trusted functions.


Yeah, it could, and the design of D has tried really hard to avoid 
such. Managed C++ was a colossal failure.


I've dealt with systems with multiple pointer types before (16 bit X86) 
and I was really, really happy to leave that behind.


Yet, there's C++ and its proliferation of library-implemented managed 
pointer types (shared_ptr, unique_ptr, weak_ptr, scoped_ptr, and 
various equivalents in libraries). Whether they're a success or a patch 
for shortcomings in the language, they're used everywhere despite the 
various mutually incompatible forms and being all leaky and arcane to 
use.


And if I'm not mistaken, this is where the @nogc subset of D is headed. 
Already, and with good reason, people are suggesting using library 
managed pointers (such as RefCounted) as a substitute to raw pointers 
in @nogc code. That doesn't automatically make @nogc a failure — C++ is 
a success after all — but it shows that you can't live in a modern 
world without managed pointers. If multiple pointer types really dooms 
a language (your theory) then the @nogc D subset is doomed too.


Yet, ARC-managed pointers are a huge success in Objective-C. I think 
the trick is to not bother people with various pointer types in regular 
code. Just make sure the default pointer type works everywhere in 
higher-level code, and then provide clear ways to escape that 
management and work at a lower level when you need to optimize a 
function or interface with external C code.


D thrives with raw pointers only because its GC implementation happens 
to manage raw pointers. That's a brilliant idea that makes things 
simpler, but this also compromises performance at other levels. I don't 
think there is a way out of that performance issue keeping raw pointers 
the default, even though I'd like to be proven wrong.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-18 Thread Michel Fortin via Digitalmars-d

On 2014-04-18 20:40:06 +, Walter Bright newshou...@digitalmars.com said:


O-C doesn't use ARC for all pointers, nor is it memory safe.


@safe would be very easy to implement in Objective-C now that ARC is there.

This has got me thinking. Ever heard "C is the new assembly"? I think 
this describes very well the relation between C and Objective-C in most 
Objective-C programs today.


Objective-C enables ARC by default for all pointers to Objective-C 
objects. Since virtually all Objective-C APIs deal with Objective-C 
objects (or integral values), if you limit yourself to Objective-C APIs 
you're pretty much memory-safe.


When most people write Objective-C programs, they use exclusively 
Objective-C APIs (that deal with Objective-C objects and integrals, 
thus memory-safe), except for the few places where performance is 
important (tight loops, specialized data structures) or where 
Objective-C APIs are not available.


You can mix and match C and Objective-C code, so no clear boundary 
separates the two, but that doesn't mean there couldn't be one. Adding 
a @safe function attribute to Objective-C that'd prevent you from 
touching a non-managed pointer is clearly something I'd like to see in 
Objective-C. Most Objective-C code I know could already be labeled 
@safe with no change. Only a small fraction would have to be updated or 
left unsafe.


Silly me, here I am discussing an improvement proposal for Objective-C 
in a D forum!


The point being, D could have managed and unmanaged pointers (like 
Objective-C with ARC has), make managed pointers the default, and let 
people escape pointer management if they want to inside 
@system/@trusted functions. One way it could be done is by tagging 
specific pointers with some attribute to make them explicitly not 
managed (what __unsafe_unretained is for in Objective-C). Perhaps the 
whole function could be tagged too. But you won't need this in general, 
only when optimizing a tight loop or something similar where 
performance really counts.


Whether that's the path D should take, I don't know.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-17 Thread Michel Fortin via Digitalmars-d
On 2014-04-17 03:13:48 +, Manu via Digitalmars-d 
digitalmars-d@puremagic.com said:



Obviously, a critical part of ARC is the compiler's ability to reduce
redundant inc/dec sequences. At which point your 'every time' assertion is
false. C++ can't do ARC, so it's not comparable.
With proper elimination, transferring ownership results in no cost, only
duplication/destruction, and those are moments where I've deliberately
committed to creation/destruction of an instance of something, at which
point I'm happy to pay for an inc/dec; creation/destruction are rarely
high-frequency operations.


You're right that transferring ownership does not cost with ARC. What 
costs you is return values and temporary local variables.


While it's nice to have a compiler that'll elide redundant 
retain/release pairs, function boundaries can often makes this 
difficult. Take this first example:


Object globalObject;

Object getObject()
{
    return globalObject; // implicit: retain(globalObject)
}

void main()
{
    auto object = getObject();
    writeln(object);
    // implicit: release(object)
}

It might not be obvious, but here the getObject function *has to* 
increment the reference count by one before returning. There's no other 
convention that'll work because another implementation of getObject 
might return a temporary object. Then, at the end of main, 
globalObject's reference counter is decremented. Only if getObject gets 
inlined can the compiler detect the increment/decrement cycle is 
unnecessary.


But wait! If writeln isn't pure (and surely it isn't), then it might 
change the value of globalObject (you never know what's in 
Object.toString, right?), which will in turn release object. So main 
*has to* increment the reference counter if it wants to make sure its 
local variable object is valid until the end of the writeln call. Can't 
elide here.


Let's take this other example:

Object globalObject;
Object otherGlobalObject;

void main()
{
    auto object = globalObject; // implicit: retain(globalObject)
    foo(object);
    // implicit: release(object)
}

Here you can elide the increment/decrement cycle *only if* foo is pure. 
If foo is not pure, then it might set another value to globalObject 
(you never know, right?), which will decrement the reference count and 
leave the object variable in main the sole owner of the object. 
Alternatively, if foo is not pure but instead gets inlined it might be 
provable that it does not touch globalObject, and elision might become 
a possibility.


I think ARC needs to be practical without eliding of redundant calls. 
It's a good optimization, but a difficult one unless everything is 
inlined. Many such elisions that would appear to be safe at first 
glance aren't provably safe for the compiler because of function calls.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-17 Thread Michel Fortin via Digitalmars-d

On 2014-04-17 17:29:02 +, Walter Bright newshou...@digitalmars.com said:


On 4/17/2014 5:34 AM, Manu via Digitalmars-d wrote:

People who care would go to the effort of manually marking weak references.


And that's not compatible with having a guarantee of memory safety.


Auto-nulling weak references are perfectly memory-safe. In Objective-C 
you use the __weak pointer modifier for that. If you don't want it to 
be auto-nulling, use __unsafe_unretained instead to get a raw pointer. 
In general, seeing __unsafe_unretained in the code is a red flag 
however. You'd better know what you're doing.


If you could transpose the concept to D, __weak would be allowed in 
@safe functions while __unsafe_unretained would not. And thus 
memory-safety is preserved.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-16 Thread Michel Fortin via Digitalmars-d

On 2014-04-16 23:20:07 +, Walter Bright newshou...@digitalmars.com said:


On 4/16/2014 3:42 PM, Adam Wilson wrote:
ARC may in fact be the most advantageous for a specific use case, but 
that in no way means that all use cases will see a performance 
improvement, and in all likelihood, may see a decrease in performance.


Right on. Pervasive ARC is very costly, meaning that one will have to 
define alongside it all kinds of schemes to mitigate those costs, all 
of which are expensive for the programmer to get right.


It's not just ARC. As far as I know, most GC algorithms require some 
action to be taken when changing the value of a pointer. If you're 
seeing this as unnecessary bloat, then there's not much hope in a 
better GC for D either.


But beyond that I wonder if @nogc won't entrench that stance even more. 
Here's the question: is assigning to a pointer allowed in a @nogc 
function? Of course it's allowed! Assigning to a pointer does not 
involve the GC in its current implementation... but what if another GC 
implementation, adopted later, needs something to be done every time a 
pointer is modified? Would that something be allowed in a @nogc 
function?


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Heartbleed and static analysis

2014-04-13 Thread Michel Fortin
On 2014-04-13 08:55:53 +, Paolo Invernizzi 
paolo.invernizzi@no.address said:


I don't remember if this has been already posted here in the forum, but 
I think that this rant of Theo de Raadt about heartbleed is _very_ 
interesting.


http://article.gmane.org/gmane.os.openbsd.misc/211963

BTW, I agree with him: it's not a matter of scrutiny or a matter of 
being human, and the post clarifies this very well.


Interesting. As far as I know, the D GC is also a wrapper around 
malloc, and it will not return memory using free when an object is 
deallocated. That rant could also apply to D.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: The @safe vs struct destructor dilemma

2014-04-12 Thread Michel Fortin

On 2014-04-11 19:54:16 +, Michel Fortin michel.for...@michelf.ca said:

Can destructors be @safe at all? When called from the GC the destructor 
1) likely runs in a different thread and 2) can potentially access 
other destructed objects, those objects might contain pointers to 
deallocated memory if their destructor manually freed a memory block.


There's another issue I forgot to mention earlier: the destructor could 
leak the pointer to an external variable. Then you'll have a reference 
to a deallocated memory block.


Note that making the destructor pure will only help for the global 
variable case. The struct/class itself could contain a pointer to a 
global or to another memory block that'll persist beyond the 
destruction of the object and assign the pointer there. It can thus 
leak the deallocating object (or even this if it's a class) through 
that pointer.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: The @safe vs struct destructor dilemma

2014-04-12 Thread Michel Fortin

On 2014-04-12 10:29:50 +, Kagamin s...@here.lot said:


On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
2- after the destructor is run on an object, wipe out the memory block 
with zeros. This way if another to-be-destructed object has a pointer 
to it, at worst it'll dereference a null pointer. With this you might 
get a sporadic crash when it happens, but that's better than memory 
corruption.


Other objects will have a valid pointer to the zeroed-out block and 
will be able to call its methods. They are likely to crash, but it's 
not guaranteed; they may just silently corrupt memory. Imagine the 
class has a pointer to a memory block of 10MB size, where the size is 
an enum encoded in the function code (so it won't be zeroed): the 
function may then write to any region of that block of memory pointed 
to by null after the clearing.


Well, that's a general problem of @safe when dereferencing any 
potentially null pointer. I think Walter's solution was to insert a 
runtime check if the offset is going to be beyond a certain size. But 
there has been discussions on non-nullable pointers since then, and I'm 
not sure what Walter thought about them.


The runtime check would help in this case, but not non-nullable pointers.

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: The @safe vs struct destructor dilemma

2014-04-12 Thread Michel Fortin

On 2014-04-12 08:50:59 +, Marc Schütz schue...@gmx.net said:

More correctly, every reference to the destroyed object needs to be 
wiped, not the object itself. But this requires a fully precise GC.


That'd be more costly (assuming you could do it) than just wiping the 
object you just destroyed. But it'd solve the issue of leaking a 
reference to the outside world from the destructor.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: The @safe vs struct destructor dilemma

2014-04-12 Thread Michel Fortin

On 2014-04-12 09:01:12 +, deadalnix deadal...@gmail.com said:


On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
2- after the destructor is run on an object, wipe out the memory block 
with zeros. This way if another to-be-destructed object has a pointer 
to it, at worst it'll dereference a null pointer. With this you might 
get a sporadic crash when it happens, but that's better than memory 
corruption. You only need to do this when allocated on the GC heap, and 
only pointers need to be zeroed, and only if another object being 
destroyed is still pointing to this object, and perhaps only do it for 
@safe destructors.


You don't get a crash, you get undefined behavior. That is much worse 
and certainly not @safe.


You get a null dereference. Because the GC will not free memory for 
objects in a given collection cycle until they're all destroyed, any 
reference to them will still be valid while the other object is being 
destroyed. In other words, if one of them has been destroyed, any 
pointer it contained will be null. That null dereference is going to be 
like any other potential null dereference in @safe code: it is expected 
to crash.


There's still the problem of leaking a reference somewhere where it 
survives beyond the current collection cycle. My proposed solution 
doesn't work for that. :-(


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: The @safe vs struct destructor dilemma

2014-04-11 Thread Michel Fortin
On 2014-04-11 06:29:32 +, Nick Sabalausky 
seewebsitetocontac...@semitwist.com said:


So the idea behind @safe is most code should be @safe, with occasional 
@system/@trusted pieces isolated deeper in the call chain. That 
inevitably means occasionally invoking @system from @safe via an 
@trusted intermediary.


Realistically, I would imagine this @trusted part should *always* be a 
dummy wrapper over a specific @system function. Why? Because @trusted 
disables ALL of @safe's extra safety checks. Therefore, restricting 
usage of @trusted to ONLY be dummy wrappers over the specific parts 
which MUST be @system will minimize the amount of collateral code that 
must lose all of @safe's special safety checks.


This means some mildly-annoying boilerplate at all the @safe/@system 
seams, but it's doable...*EXCEPT*, afaics, for struct destructors. 
Maybe I'm missing something, but I'm not aware of any reasonable way to 
stuff those behind an @trusted wrapper (or even an ugly way, for that 
matter).


If there really *isn't* a reasonable way to wrap @system struct 
destructors (ex: RefCounted) inside an @trusted wall, then any such 
structs will poison all functions which touch them into being @trusted, 
thus destroying the @safe safety checks for the *entire* body of such 
functions. Well, that is, aside from any portions of the function which 
don't touch the struct *and* can be factored out into separate @safe 
helper functions - but that solution seems both limited and 
contortion-prone.


Any thoughts?


Can destructors be @safe at all? When called from the GC the destructor 
1) likely runs in a different thread and 2) can potentially access 
other destructed objects, those objects might contain pointers to 
deallocated memory if their destructor manually freed a memory block.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: The @safe vs struct destructor dilemma

2014-04-11 Thread Michel Fortin
On 2014-04-11 22:22:18 +, Nick Sabalausky 
seewebsitetocontac...@semitwist.com said:



On 4/11/2014 3:54 PM, Michel Fortin wrote:


Can destructors be @safe at all? When called from the GC the destructor
1) likely runs in a different thread and 2) can potentially access other
destructed objects, those objects might contain pointers to deallocated
memory if their destructor manually freed a memory block.


If destructors can't be @safe, that would seem to create a fairly 
sizable hole in the utility of @safe.


Well, they are safe as long as they're not called by the GC. I think 
you could make them safe even with the GC by changing things this way:


1- make the GC call the destructor in the same thread the object was 
created in (for non-shared objects), so any access to thread-local 
stuff stays in the right thread, avoiding low-level races.


2- after the destructor is run on an object, wipe out the memory block 
with zeros. This way if another to-be-destructed object has a pointer 
to it, at worst it'll dereference a null pointer. With this you might 
get a sporadic crash when it happens, but that's better than memory 
corruption. You only need to do this when allocated on the GC heap, and 
only pointers need to be zeroed, and only if another object being 
destroyed is still pointing to this object, and perhaps only do it for 
@safe destructors.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Use C++ exception model in D

2014-04-09 Thread Michel Fortin

On 2014-04-09 17:27:55 +, David Nadlinger c...@klickverbot.at said:


On Wednesday, 9 April 2014 at 16:48:54 UTC, Jacob Carlborg wrote:
x86_64 yes, not necessarily only for DMD. I thought if DMD, LDC and GDC 
all used the same exception handling model and the same as C++ it would 
be easier. Especially for implementing support for Objective-C 
exceptions, which is why I initially started this thread.


They don't. GDC and LDC use libunwind, whereas DMD uses its own custom 
EH implementation.


With GDC and LDC, you'd just need to add your code to handle 
Objective-C exceptions to the personality functions. libunwind 
exceptions have a type/source language field, and by default most 
implementations either ignore unknown exception types or abort on 
encountering them.


Maybe the right course of action would be to just ignore Objective-C 
exceptions in 64-bit DMD and finish the rest of D/Objective-C. The day 
D/Objective-C is ported to GDC and LDC, it should be much easier to 
make exceptions work there.


The funny thing is that within Cocoa exceptions are usually reserved 
for logic errors (array out of bounds, method calls with unknown 
selector, assertion errors, etc.). That's the kind of error we consider 
unrecoverable when they happen in D code. So I think people can manage 
without Objective-C exceptions for some time.


Given what I wrote above, I'll also argue that it's not a wise move to 
support exceptions in D/Objective-C with the legacy runtime even though 
I implemented it. Because the legacy runtime uses setjmp/longjmp for 
exceptions, try blocks in a function that calls extern(Objective-C) 
functions are costly. And the compiler has to implicitly add those 
costly try blocks to wrap/unwrap exception objects to prevent unwinding 
of the wrong kind from leaving the current function. It was fun to 
implement, but it's also the most intrusive change D/Objective-C makes 
to the frontend, and a big part of the additions to druntime.


If we plan to support mixed unwind mechanisms some day it might be 
useful to keep, because the logic for bridging between two or more 
unwinding systems is all there. Otherwise I'd probably scrap the whole 
exception wrapping/unwrapping thing. I doubt the performance cost is 
worth it, and I doubt the maintenance cost for the additional 
complexity is worth it. The legacy Objective-C runtime is mostly gone 
from Apple's ecosystem anyway.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Specifying C++ symbols in C++ namespaces

2014-04-06 Thread Michel Fortin
On 2014-04-06 19:39:31 +, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com said:


Unfortunately that seems to be years into the future? Although clang 
has begun implementing something:


http://clang.llvm.org/docs/Modules.html

I've got a feeling that if clang gets something working it will become 
a de-facto standard due to demand.


Modules are already in use on OS X for some system frameworks. It can 
result in slightly improved compile times. It has been enabled by 
default for new Xcode projects for some time. With modules enabled, 
clang interprets #includes as module imports for system frameworks 
having a module map. It's so transparent you probably won't notice 
anything has changed.


The only thing that really changes with modules is that you don't have 
access to symbols you haven't imported yourself. D already works 
like that. I think D can ignore those modules, just like it ignores 
header files.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Specifying C++ symbols in C++ namespaces

2014-04-05 Thread Michel Fortin

On 2014-04-05 20:47:32 +, Walter Bright newshou...@digitalmars.com said:


On 4/2/2014 3:07 PM, Walter Bright wrote:
One downside of this proposal is that if we ever (perish the thought!) 
attempted to interface to C++ templates, this design would preclude 
that.


Yes, this seems to be a fatal flaw. Another design that has evolved 
from these discussions and my discussions with Andrei on it:


 extern (C++, namespace = A.B) { void foo(); void bar(); }
 extern (C++, namespace = C) void foo();

 bar();  // works
 A.B.bar(); // works
 foo(); // error: ambiguous
 C.foo();   // works

 alias C.foo foo;
 foo();  // works, calling C.foo()

I really think the namespace semantics should be attached to the 
extern(C++) rather than be a separate pragma. Having the namespace= 
thing means that namespace isn't a keyword, and provides a general 
mechanism where we can add language specific information as necessary.


I like this idea. But... should this potentially useful thing really be 
restricted to extern C++ things? I've seen at least one attempt to 
create a namespace using what D currently offers [1], and frankly 
something like the above would make much more sense than a class no one 
can instantiate.


[1]: http://dlang.org/phobos/std_datetime.html#.Clock

Here's a suggestion:

@namespace A.B { // can create two levels at once, yeah!
void foo();
void bar();
}
@namespace C {
void foo();
}

Make those C++ declarations, it does not look too foreign anymore:

extern (C++) @namespace A.B {
void foo();
void bar();
}
extern (C++) @namespace C {
void foo();
}

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Specifying C++ symbols in C++ namespaces

2014-04-05 Thread Michel Fortin

On 2014-04-05 20:47:32 +, Walter Bright newshou...@digitalmars.com said:

Yes, this seems to be a fatal flaw. Another design that has evolved 
from these discussions and my discussions with Andrei on it:


 extern (C++, namespace = A.B) { void foo(); void bar(); }
 extern (C++, namespace = C) void foo();

 bar();  // works
 A.B.bar(); // works
 foo(); // error: ambiguous
 C.foo();   // works

 alias C.foo foo;
 foo();  // works, calling C.foo()


What if you also have a C++ foo at global scope?

module cpptest;

extern (C++) void foo();
extern (C++, namespace = A) void foo();

foo(); // ambiguous
A.foo(); // works
.foo(); // works?
cpptest.foo(); // works?

Do these last two lines make sense?


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Specifying C++ symbols in C++ namespaces

2014-04-03 Thread Michel Fortin

On 2014-04-03 03:48:18 +, Walter Bright newshou...@digitalmars.com said:


On 4/2/2014 7:14 PM, Michel Fortin wrote:

That's a contrived example.


Not at all. The whole point of using namespaces in C++ is to introduce 
a scope. And the whole point of scopes is to have the same name in 
different scopes represent different objects.



Perhaps I'm wrong, but I'd assume the general use case is that all 
functions in a module will come from the same C++ namespace.


I believe that is an incorrect assumption. C++ namespaces were 
specifically (and wrongly, in my not so humble opinion, but there it 
is) not designed to be closed, nor have any particular relationship 
with modules.



Alternatively you can use another module for the other namespace.


Forcing C++ code that exists in a single file to be split up among 
multiple D files is inflicting unnecessary punishment on the poor guy 
trying to justify migrating to D.


Ok, let's assume that we actually want to reproduce the C++ file 
structure then. Let us have a C++ project, with two files. I'll 
temporarily use the 'namespace' keyword on the D side until we can 
decide on how to best represent a namespace:


 module foo;
 extern (C++):

 namespace S { namespace T {
     int foo();
     namespace U {
         int foo();
     }
 } }


 module bar;
 extern (C++):

 namespace S { namespace T {
     int bar();
     namespace U {
         int bar();
     }
 } }

Now let's use those:

 module main;
 import foo;
 import bar;

 void main() {
 S.T.foo();
 S.T.U.bar();
 }

But how does the lookup for those functions work? If we use structs or 
templates to represent those namespaces in D then you'll have to 
specify the module name to disambiguate the struct/template itself, and 
the namespace just becomes a nuisance you have to repeat over and over:


 void main() {
 .foo.S.T.foo();
 .bar.S.T.U.bar();
 }

Here I'd argue that having whole-module namespaces in D makes no sense. 
So let's retry by peeling the S.T part of the namespace:


 module foo;
 extern (C++, S.T):

 int foo();
 namespace U {
 int foo();
 }


 module bar;
 extern (C++, S.T):

 int bar();
 namespace U {
 int bar();
 }


 module main;
 import foo;
 import bar;

 void main() {
 foo();
 .bar.U.bar();
 }

Better. Still, if you want C++ namespaces to work nicely, you'll have 
to introduce first class namespace support in D. That means that 
identical namespaces are merged into each other when you import 
modules that contain them. It'd allow you to write this:


 void main() {
 foo();
 U.bar();
 }

Still, I'm not convinced that'd be terribly helpful. Namespaces in D 
would make it easier to declare things 1:1 for sure, but anything that 
depends on Koenig lookup[1] will be broken in D. It could even be 
silently broken as no Koenig lookup means another function not in a 
namespace could be used silently instead of the expected one in a 
namespace (assuming a naive port of some C++ code).


[1]: https://en.wikipedia.org/wiki/Argument-dependent_name_lookup

I'd tend to simply implement extern(C++, namespace.here), which should 
work fine to wrap single-namespace cpp files, and wait to see what are 
the actual friction points before introducing more (people can 
experiment with structs or other modules meanwhile).



--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Specifying C++ symbols in C++ namespaces

2014-04-03 Thread Michel Fortin

On 2014-04-03 19:43:23 +, Walter Bright newshou...@digitalmars.com said:


On 4/3/2014 3:36 AM, Michel Fortin wrote:
I'd tend to simply implement extern(C++, namespace.here), which should 
work fine to wrap single-namespace cpp files, and wait to see what are 
the actual friction points before introducing more (people can 
experiment with structs or other modules meanwhile).


You have a good point in that to go all the way with namespaces, we'd 
have to implement Koenig lookup and support insertion of names into 
previous namespaces.


I can't see this happening in D.


Me neither.

But I don't see that as much of an argument to not do simple scoping 
with namespace lookup.


What I'm saying is that it should be optional to create a new scope to 
declare a C++ function from a namespace. In other words you need to be 
able to put the function at module scope in D.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Specifying C++ symbols in C++ namespaces

2014-04-02 Thread Michel Fortin

On 2014-04-03 01:09:43 +, Walter Bright newshou...@digitalmars.com said:


I considered that, but it fails because:

C++:

 namespace S { namespace T {
 int foo();
 namespace U {
 int foo();
 }
  } }

D:

   extern (C++, S.T) {
       int foo();
       extern (C++, U) {
           int foo();
       }
   }
   foo();     // error, ambiguous, which one?
   S.T.foo(); // S undefined


That's a contrived example. Perhaps I'm wrong, but I'd assume the 
general use case is that all functions in a module will come from the 
same C++ namespace. For the contrived example above, I think it's fair 
you have to use a contrived solution:


module s.t;
extern (C++, S.T):

int foo();

struct U {
static extern (C++, S.T.U):

int foo();
}

Alternatively you can use another module for the other namespace.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: What would be the consequence of implementing interfaces as fat pointers ?

2014-04-01 Thread Michel Fortin

On 2014-04-01 05:39:04 +, Manu turkey...@gmail.com said:


The point is, it's impossible to bank on pointers being either 32 or 64
bit. This leads to #ifdef's at the top of classes in my experience. D is
not exempt.


Doesn't align(n) work for class members? If it does not, it doesn't 
seem it'd be hard to implement.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: What would be the consequence of implementing interfaces as fat pointers ?

2014-04-01 Thread Michel Fortin

On 2014-04-01 07:11:51 +, Manu turkey...@gmail.com said:


Of course, I use alias this all the time too for various stuff. I said
before, it's a useful tool and it's great D *can* do this stuff, but I'm
talking about this particular super common use case where it's used to hack
together nothing more than a class without a vtable, ie, a basic ref type.
I'd say that's worth serious consideration as a 1st-class concept?


You don't need it as a 1st-class D concept though. Just implement the 
basics of the C++ object model in D, similar to what I did for 
Objective-C, and let people define their own extern(C++) classes with 
no base class. Bonus if it's binary compatible with the equivalent C++ 
class. Hasn't someone done that already?


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: What would be the consequence of implementing interfaces as fat pointers ?

2014-04-01 Thread Michel Fortin

On 2014-04-01 14:17:33 +, Manu turkey...@gmail.com said:


On 1 April 2014 22:03, Michel Fortin michel.for...@michelf.ca wrote:


On 2014-04-01 07:11:51 +, Manu turkey...@gmail.com said:


Of course, I use alias this all the time too for various stuff. I said
before, it's a useful tool and it's great D *can* do this stuff, but I'm

talking about this particular super common use case where it's used to
hack
together nothing more than a class without a vtable, ie, a basic ref type.
I'd say that's worth serious consideration as a 1st-class concept?



You don't need it as a 1st-class D concept though. Just implement the
basics of the C++ object model in D, similar to what I did for Objective-C,
and let people define their own extern(C++) classes with no base class.
Bonus if it's binary compatible with the equivalent C++ class. Hasn't
someone done that already?


I don't think the right conceptual solution to a general ref-type intended
for use throughout D code is to mark it extern C++... That makes no sense.


I was thinking of having classes that'd be semantically equivalent to 
those in D but would follow the C++ ABI, hence the extern(C++). It 
doesn't have to support all of C++, just the parts that intersect with 
what you can express in D. For instance, those classes would be 
reference types, just like D classes; if you need value-type behaviour, 
use a struct.


But maybe that doesn't make sense.

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-24 Thread Michel Fortin

On 2014-03-24 06:08:30 +, safety0ff safety0ff@gmail.com said:


Everything seems to work in your corrected versions:
http://dpaste.dzfl.pl/3b32535559ba

Andrei's version doesn't compile on recent compiler versions due to 
goto skipping initialization.


Ok, so let's check what happens in actual usage. In other words, what 
happens when front is inlined and used in a loop. I'm counting the 
number of assembly lines of this main function for looping through each 
possible input using the various implementations:


http://goo.gl/QjnA2k

implementation   line count of main in assembly
andralex 285 -  9 labels = 276 instructions
MFortin1 135 - 13 labels = 122 instructions
MFortin2 150 - 14 labels = 136 instructions
safety0ff -- not inlined (too big?) --
dnspies  161 - 15 labels = 146 instructions
CWilliams233 - 25 labels = 208 instructions

For compactness, my first implementation seems to be the winner, both 
with and without inlining.


That said, I think front should only assume a non-empty char[] and thus 
should check the length before accessing anything after the first byte 
to protect against incomplete multi-byte sequences such as 
[0x1000_]. So I added this line to my two implementations:


   if (1 + tailLength > s.length) return dchar.init;

Now lets see what happens with those changes:

http://goo.gl/XPCGYE

implementation   line count of main in assembly
MFortin1Check103 - 11 labels =  92 instructions
MFortin2Check135 - 13 labels = 122 instructions

Now I'm baffled. Adding a test makes the code shorter? It actually make 
the standalone functions longer by 8 instructions (as expected), but 
the inlined version is shorter. My guess is that the optimizer creates 
something entirely different and it turns out that this different 
version optimises better after inlining.


That said, my test main function isn't very representative of the 
general case because the length is statically known by the optimizer. 
Let's see what it does when the length is not statically known:


http://goo.gl/E2Q0Yu

implementation   line count of main in assembly
andralex 384 -  9 labels = 375 instructions
MFortin1 173 - 19 labels = 154 instructions
MFortin2 182 - 18 labels = 164 instructions
safety0ff -- not inlined --
dnspies   -- not inlined --
CWilliams229 - 23 labels = 206 instructions
MFortin1Check211 - 24 labels = 187 instructions
MFortin2Check218 - 21 labels = 197 instructions

So again MFortin1 is the winner for compactness. Still, I maintain that 
we ought to at least check the length of the array for multi-byte 
sequences to protect against incomplete sequences that could lay at the 
end of the string, so I'd favor MFortin1Check.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-24 Thread Michel Fortin

On 2014-03-24 13:39:45 +, Michel Fortin michel.for...@michelf.ca said:

That said, my test main function isn't very representative of the 
general case because the length is statically known by the optimizer. 
Let's see what it does when the length is not statically known:


Oops, wrong link, was from an intermediary test.
This one will give you the results in the table:
http://goo.gl/lwU4hv


implementation   line count of main in assembly
andralex 384 -  9 labels = 375 instructions
MFortin1 173 - 19 labels = 154 instructions
MFortin2 182 - 18 labels = 164 instructions
safety0ff -- not inlined --
dnspies   -- not inlined --
CWilliams229 - 23 labels = 206 instructions
MFortin1Check211 - 24 labels = 187 instructions
MFortin2Check218 - 21 labels = 197 instructions



--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Should we deprecate comma?

2014-03-24 Thread Michel Fortin
On 2014-03-24 16:42:59 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



tuple()
tuple(a)
tuple(a, b)
tuple(a, b, c)


struct()
struct(a)
struct(a, b)
struct(a, b, c)

Tuples are actually nameless structs, no?

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-23 Thread Michel Fortin
On 2014-03-23 21:22:58 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



Here's a baseline: http://goo.gl/91vIGc. Destroy!


Optimizing for smallest assembly size:

dchar front(char[] s)
{
    size_t bytesize;
    dchar result;
    switch (s[0])
    {
        case 0b0000_0000: .. case 0b0111_1111:
            return s[0];
        case 0b1100_0000: .. case 0b1101_1111:
            return ((s[0] & 0b0001_1111) << 6) | (s[1] & 0b0011_1111);
        case 0b1110_0000: .. case 0b1110_1111:
            result = s[0] & 0b0000_1111;
            bytesize = 3;
            break;
        case 0b1111_0000: .. case 0b1111_0111:
            result = s[0] & 0b0000_0111;
            bytesize = 4;
        default:
            return dchar.init;
    }
    foreach (i; 1..bytesize)
        result = (result << 6) | (s[i] & 0b0011_1111);
    return result;
}

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-23 Thread Michel Fortin
On 2014-03-23 22:58:32 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



Array bounds checking takes care of that.


If you want the smallest assembly size with array bound checking, make 
the function @safe and see the disaster it causes to the assembly size. 
That's what you have to optimize. If you're going to optimize while 
looking at the assembly, better check for bounds manually:


dchar front(char[] s)
{
    if (s.length < 1) return dchar.init;
    size_t bytesize;
    dchar result;
    switch (s[0])
    {
        case 0b0000_0000: .. case 0b0111_1111:
            return s[0];
        case 0b1100_0000: .. case 0b1101_1111:
            result = s[0] & 0b0001_1111;
            bytesize = 2;
            break;
        case 0b1110_0000: .. case 0b1110_1111:
            result = s[0] & 0b0000_1111;
            bytesize = 3;
            break;
        case 0b1111_0000: .. case 0b1111_0111:
            result = s[0] & 0b0000_0111;
            bytesize = 4;
        default:
            return dchar.init;
    }
    if (s.length < bytesize) return dchar.init;
    foreach (i; 1..bytesize)
        result = (result << 6) | (s[i] & 0b0011_1111);
    return result;
}



--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-23 Thread Michel Fortin
On 2014-03-24 02:25:17 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



On 3/23/14, 6:53 PM, Michel Fortin wrote:

On 2014-03-23 21:22:58 +, Andrei Alexandrescu
seewebsiteforem...@erdani.org said:


Here's a baseline: http://goo.gl/91vIGc. Destroy!


Optimizing for smallest assembly size:

dchar front(char[] s)
{
    size_t bytesize;
    dchar result;
    switch (s[0])
    {
        case 0b0000_0000: .. case 0b0111_1111:
            return s[0];
        case 0b1100_0000: .. case 0b1101_1111:
            return ((s[0] & 0b0001_1111) << 6) | (s[1] & 0b0011_1111);
        case 0b1110_0000: .. case 0b1110_1111:
            result = s[0] & 0b0000_1111;
            bytesize = 3;
            break;
        case 0b1111_0000: .. case 0b1111_0111:
            result = s[0] & 0b0000_0111;
            bytesize = 4;
        default:
            return dchar.init;
    }
    foreach (i; 1..bytesize)
        result = (result << 6) | (s[i] & 0b0011_1111);
    return result;
}


Nice, thanks! I'd hope for a short path for the ASCII subset, could you 
achieve that?


Unfortunately, there's a bug in the above. A missing break results in 
a fallthrough to default case. That's why the optimizer is so good, it 
just omits the four-byte branch entirely. I noticed something was wrong 
by looking at the assembly. I really wish D had no implicit fallthrough.


But try this instead, the result is even shorter:

dchar front(char[] s)
{
    if (s[0] < 0b100_0000)
        return s[0]; // ASCII

    // pattern      indicator  tailLength
    // 0b100x_xxxx  0b00 (0)   1
    // 0b101x_xxxx  0b01 (1)   1 == indicator
    // 0b110x_xxxx  0b10 (2)   2 == indicator
    // 0b111x_xxxx  0b11 (3)   3 == indicator
    // note: undefined result for illegal 0b10xx_xxxx case

    auto indicator = (s[0] >> 5) & 0b11;
    auto tailLength = indicator ? indicator : 1;

    dchar result = s[0] & (0b0011_1111 >> tailLength);
    foreach (i; 0..tailLength)
        result = (result << 6) | (s[1+i] & 0b0011_1111);
    return result;
}

(Disclaimer: not tested, but I did check that all the expected code 
paths are present in the assembly this time.)


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-23 Thread Michel Fortin

On 2014-03-24 04:39:08 +, bearophile bearophileh...@lycos.com said:


Michel Fortin:


I really wish D had no implicit fallthrough.


Isn't your wish already granted?


Maybe. I don't have a D compiler installed at the moment to check. I'm 
just playing with d.godbolt.org and it accepts implicit fallthrough.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-23 Thread Michel Fortin

On 2014-03-24 04:37:22 +, Michel Fortin michel.for...@michelf.ca said:


But try this instead, the result is even shorter:


Oops, messed up my patterns. Here's a hopefully fixed front():

dchar front(char[] s)
{
    if (s[0] < 0b100_0000)
        return s[0]; // ASCII

    // pattern      indicator  tailLength
    // 0b1100_xxxx  0b00 (0)   1
    // 0b1101_xxxx  0b01 (1)   1 == indicator
    // 0b1110_xxxx  0b10 (2)   2 == indicator
    // 0b1111_xxxx  0b11 (3)   3 == indicator
    // note: undefined result for illegal 0b10xx_xxxx case

    auto indicator = (s[0] >> 4) & 0b11;
    auto tailLength = indicator ? indicator : 1;

    dchar result = s[0] & (0b0011_1111 >> tailLength);
    foreach (i; 0..tailLength)
        result = (result << 6) | (s[1+i] & 0b0011_1111);
    return result;
}

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-23 Thread Michel Fortin
On 2014-03-24 05:09:15 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



On 3/23/14, 9:49 PM, Michel Fortin wrote:



http://goo.gl/y9EVFr

Andrei


As I said earlier, that front2 version is broken. It's missing a break. 
Adding the break makes it non-interesting from an instruction count 
point of view.


Also, there's still an issue in my code above (found by safety0ff): the 
first if for ASCII should use 0b1000_0000, not 0b100_0000 (missing one 
zero). Here's the corrected (again) version of front1:


dchar front1(char[] s)
{
    if (s[0] < 0b1000_0000)
        return s[0]; // ASCII

    // pattern      indicator  tailLength
    // 0b1100_xxxx  0b00 (0)   1
    // 0b1101_xxxx  0b01 (1)   1 == indicator
    // 0b1110_xxxx  0b10 (2)   2 == indicator
    // 0b1111_xxxx  0b11 (3)   3 == indicator
    // note: undefined result for illegal 0b10xx_xxxx case

    auto indicator = (s[0] >> 4) & 0b11;
    auto tailLength = indicator ? indicator : 1;

    dchar result = s[0] & (0b0011_1111 >> tailLength);
    foreach (i; 0..tailLength)
        result = (result << 6) | (s[1+i] & 0b0011_1111);
    return result;
}

And I'm going to suggest a variant of the above with one less branch 
(but three more instructions): look at how tailLength is computed by 
or'ing it with the negation of bit 2 of indicator. I suspect it'll be 
faster with non-ASCII input, unless it gets inlined less.


dchar front2(char[] s)
{
    if (s[0] < 0b1000_0000)
        return s[0]; // ASCII

    // pattern      indicator  tailLength
    // 0b1100_xxxx  0b00 (0)   1
    // 0b1101_xxxx  0b01 (1)   1
    // 0b1110_xxxx  0b10 (2)   2
    // 0b1111_xxxx  0b11 (3)   3
    // note: undefined result for illegal 0b10xx_xxxx case

    auto indicator = (s[0] >> 4) & 0b11;
    auto tailLength = indicator | ((~indicator >> 1) & 0b1);

    dchar result = s[0] & (0b0011_1111 >> tailLength);
    foreach (i; 0..tailLength)
        result = (result << 6) | (s[1+i] & 0b0011_1111);
    return result;
}

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Challenge: write a really really small front() for UTF8

2014-03-23 Thread Michel Fortin

On 2014-03-24 04:58:08 +, safety0ff safety0ff@gmail.com said:


0b100_0000 is missing a zero: 0b1000_0000


Indeed, thanks.


Fixing that, I still get a range violation from s[1+i].


That "auto indicator = (s[0] >> 5) & 0b11;" line is wrong too: s[0] 
needs a shift by 4, not by 5. No doubt it crashes your test program.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Final by default?

2014-03-17 Thread Michel Fortin

On 2014-03-17 01:20:37 +, Walter Bright newshou...@digitalmars.com said:


On 3/15/2014 6:44 AM, Johannes Pfau wrote:

Then in cairo.d
version(CAIRO_HAS_PNG_SUPPORT)
{
extern(C) int cairo_save_png(char* x);
void savePNG(string x){cairo_save_png(toStringz(x))};
}


try adding:

   else
   {
       void savePNG(string x) { }
   }

and then your users can just call savePNG without checking the version.


Adding a stub that does nothing, not even a runtime error, isn't a very 
good solution in my book. If this function call should fail, it should 
fail early and noisily.


So here's my suggestion: use a template function for the wrapper.

extern(C) int cairo_save_png(char* x);
void savePNG()(string x){cairo_save_png(toStringz(x));}

If you call it somewhere and cairo_save_png was not compiled into 
Cairo, you'll get a link-time error (undefined symbol cairo_save_png). 
If you don't call savePNG anywhere, there's no issue because savePNG 
was never instantiated.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: TLBB: The Last Big Breakage

2014-03-16 Thread Michel Fortin
On 2014-03-16 04:08:18 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:


Fixing that has not gained focus until recently, when e.g. 
https://github.com/D-Programming-Language/dmd/pull/3067 has come about.


Synchronized classes should be trashed.

The whole concept is very prone to mistakes that could cause deadlocks 
and offers no simple path to fix those errors once they're found. The 
concept encourages people to keep locks longer than needed to access the 
data. For one thing, that's bad for performance. It also makes callbacks 
happen while the lock is held, which has a potential for deadlock if 
the callback locks something else (through synchronized or other means).


Sure, there are safe ways to implement a synchronized class: you have 
to use it solely as a data holder that does nothing else than store a 
couple of variables and provide accessors to that data. Then you build 
the business logic -- calculations, callbacks, observers -- in a 
separate class that holds your synchronized class but does the work 
outside of the lock.


The problem is that it's a very unnatural way to think of classes. Also 
you have a lot of boilerplate code to write (synchronized class + 
accessors) for every piece of synchronized data you want to hold. I bet 
most people will not bother and won't realize that deadlocks could 
happen.


Is there any example of supposedly well-written synchronized classes in 
the wild that I could review looking for that problem?


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Final by default?

2014-03-15 Thread Michel Fortin

On 2014-03-15 18:18:27 +, Walter Bright newshou...@digitalmars.com said:


On 3/15/2014 2:21 AM, Paulo Pinto wrote:

In any language with properties, accessors also allow for:

- lazy initialization

- changing the underlying data representation without requiring client code to
be rewritten

- implement access optimizations if the data is too costly to keep around


You can always add a property function later without changing user code.


In some alternate universe where clients restrict themselves to 
documented uses of APIs, yes. Not if the client decides he wants to use 
++ on the variable, or take its address, or pass it by ref to another 
function (perhaps without even noticing).


And it also breaks binary compatibility.

If you control the whole code base it's reasonable to say you won't 
bother with properties until they're actually needed for some reason. 
It's easy enough to refactor your things whenever you decide to make 
the change.


But if you're developing a library for other to use though, it's better 
to be restrictive from the start... if you care about not breaking your 
client's code base that is. It basically comes to the same reasons as 
to why final-by-default is better than virtual-by-default: it's better 
to start with a restrictive API and then expand the API as needed than 
being stuck with an API that restricts your implementation choices 
later on.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Compiler updating user code

2014-03-14 Thread Michel Fortin

On 2014-03-14 05:14:52 +, Manu turkey...@gmail.com said:


So it comes up fairly regularly that people suggest that the compiler
should have a mode where it may update user code automatically to assist
migration to new compiler versions.

I'm personally against the idea, and Walter certainly doesn't like it, but
it occurred to me that a slight variation on this idea might be awesome.

Imagine instead, an '-update' option which instead of modifying your code,
would output a .patch file containing suggested amendments wherever it
encountered deprecated code...
The user can then take this patch file, inspect it visually using their
favourite merge tool, and pick and choose the bits that they agree or
disagree with.

I would say this approach takes a dubious feature and turns it into a
spectacular feature!


If you're using a version control system, it's probably simpler to just 
apply the patch automatically and then review it like you'd review any 
change you're ready to commit, while tweaking the changes if needed.


But before you can consider building such a tool, you'll have to 
convince Walter that the locations of tokens should be tracked more 
precisely in the frontend. Currently the frontend remembers only the 
file and line of any token it finds. You can't implement a refactoring 
with that. Last time that came up in a discussion (about error messages 
showing the exact location of the error), the idea was shut down on the 
ground that storing better location info would slow down the compiler.




Language changes are probably easy enough to handle, but what about cases
of 'deprecated' in the library?
It's conceivable that the deprecated keyword could take an optional
argument to a CTFE function which would receive the expression as a string,
and the function could transform and return an amended string which would
also be added to the output patch file. This way, the feature could
conceivably also offer upgrade advice for arbitrary library changes.

Considering the advice in the context of a visual diff/merge window would
be awesome if you ask me.


Xcode has a refactoring for transitioning to ARC, and it works by 
presenting you a merge view of the changes, which you can edit, before 
saving them. I'm sure other IDEs can do that too.

http://cdn5.raywenderlich.com/wp-content/uploads/2012/01/Review-Changes.png

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Compiler updating user code

2014-03-14 Thread Michel Fortin
On 2014-03-14 12:46:33 +, Andrej Mitrovic 
andrej.mitrov...@gmail.com said:



2) The feature will only work with the latest compiler. In other
words, if you want to update a 2.070 codebase to 2.072, you will
actually need to use two compilers. The porting tool released with DMD
2.071 to port 2.070 to 2.071, and the tool released with 2.072 to port
2.071 to 2.072.


I see what you mean. The moment the parser can't recognize the old 
thing, you can't migrate.


But it doesn't have to be as you say. You can keep things working in a 
deprecated state for longer than one version. As long as the old thing 
can be parsed, it can be updated.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Final by default?

2014-03-14 Thread Michel Fortin
On 2014-03-14 15:17:08 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



A few possibilities discussed around here:

!final
~final
final(false)
@disable final

I've had an epiphany literally a few seconds ago that final(false) 
has the advantage of being generalizable to final(bool) taking any 
CTFE-able Boolean.


On occasion I needed a computed qualifier (I think there's code in 
Phobos like that) and the only way I could do it was through ugly code 
duplication or odd mixin-generated code. Allowing computed 
qualifiers/attributes would be a very elegant and general approach, and 
plays beautifully into the strength of D and our current investment in 
Boolean compile-time predicates.


final(bool) is my preferred solution too.

It certainly is more verbose than 'virtual', but it offers more 
possibilities. Also, the pattern is already somewhat established with 
align(int).


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: inlining...

2014-03-14 Thread Michel Fortin

On 2014-03-14 17:57:59 +, Jacob Carlborg d...@me.com said:


int output = mixin func(10); // the 'mixin' keyword seems to kinda 'get


I think this is the best syntax of these three alternatives.


Maybe, but what does it do? Should it just inline the call to func? Or 
should it inline recursively every call inside func? Or maybe something 
in the middle?


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Compiler updating user code

2014-03-14 Thread Michel Fortin

On 2014-03-14 18:22:19 +, Jacob Carlborg d...@me.com said:


On 2014-03-14 13:09, Michel Fortin wrote:


But before you can consider building such a tool, you'll have to
convince Walter that the locations of tokens should be tracked more
precisely in the frontend. Currently the frontend remembers only the
file and line of any token it finds. You can't implement a refactoring
with that. Last time that came in in a discussion (about error messages
showing the exact location of the error), the idea was shut down on the
ground that storing better location info would slow down the compiler.


This has already been implemented [1]. DMD (git master) now supports the 
-vcolumns flag to display error message with information about the 
column.


[1] https://github.com/D-Programming-Language/dmd/pull/3077


Oh, I missed that. Great!

I'm still not entirely sure it retains enough information to perform a 
refactoring, but it's a start.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Compiler updating user code

2014-03-14 Thread Michel Fortin

On 2014-03-14 18:22:19 +, Jacob Carlborg d...@me.com said:


On 2014-03-14 13:09, Michel Fortin wrote:


But before you can consider building such a tool, you'll have to
convince Walter that the locations of tokens should be tracked more
precisely in the frontend. Currently the frontend remembers only the
file and line of any token it finds. You can't implement a refactoring
with that. Last time that came in in a discussion (about error messages
showing the exact location of the error), the idea was shut down on the
ground that storing better location info would slow down the compiler.


This has already been implemented [1]. DMD (git master) now supports the 
-vcolumns flag to display error message with information about the 
column.


[1] https://github.com/D-Programming-Language/dmd/pull/3077


Makes me think, instead of generating patches or altering code, we 
could take the clang approach and simply suggest fixes during 
compilation. This is what suggested fixes looks like in clang:


page.mm:67:8: warning: using the result of an assignment as a condition 
without parentheses [-Wparentheses]
        if (x = 4)
            ~~^~~
page.mm:67:8: note: place parentheses around the assignment to silence 
this warning
        if (x = 4)
              ^
            (    )
page.mm:67:8: note: use '==' to turn this assignment into an equality 
comparison
        if (x = 4)
              ^
              ==

The two note: messages are the two possible suggested fixes.

If you're using Xcode, the IDE will actually parse those notes and let 
you choose between the two proposed fixes in a popup menu.

http://i.stack.imgur.com/UqIqm.png

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Compiler updating user code

2014-03-14 Thread Michel Fortin

On 2014-03-14 21:36:56 +, Jacob Carlborg d...@me.com said:


On 2014-03-14 20:26, Michel Fortin wrote:


Makes me think, instead of generating patches or altering code, we could
take the clang approach and simply suggest fixes during compilation.
This is what suggested fixes looks like in clang:


I don't really think this is the same. It also adds a lot work since 
the isn't making the actual change.


It's giving you all the data needed to make the change.

If Xcode can parse this output and patch your code with one click in 
the editor, then you can also build a tool parsing the output and 
generating those patches automatically, as long as there's no ambiguity 
about the fix. Or if there's ambiguity, you could have an interactive 
command line tool ask how you want it patched.


The only difference is the UI around the compiler.

--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Final by default?

2014-03-14 Thread Michel Fortin

On 2014-03-14 20:51:08 +, monarch_dodra monarchdo...@gmail.com said:


I hate code commented out in an #if 0 with a passion. Just... Why?


Better this:


#if 0
...
#else
...
#endif


than this:


/*
...
/*/
...
//*/


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: D/Objective-C 64bit

2014-03-13 Thread Michel Fortin

On 2014-03-13 18:13:44 +, Jacob Carlborg d...@me.com said:


On 2014-03-13 17:16, Johannes Pfau wrote:


Is it possible to split objc.c into two files, one for backend
interfacing functions (ObjcSymbols) and one for the generic frontend
stuff?


I would guess so. I would need to take a look to see how coupled the 
code in objc.c is. Although, most code is for backend.


I think that'd be a good idea too. When I wrote that code I wanted 
everything to be close by as it was easier to experiment, but there's 
no need for that now.


Perhaps, instead of splitting, classes derived from frontend classes 
(Expression, Declaration, Type) should be moved to their corresponding 
files and live with other similar classes of the frontend, protected in 
#if DMD_OBJC blocks.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: D/Objective-C 64bit

2014-03-12 Thread Michel Fortin

On 2014-03-12 08:06:47 +, w0rp devw...@gmail.com said:

This is really awesome work. If you combined ARM support with Objective 
C support, it would mean you could write iOS programs in D without much 
frustration, and that would be a huge step forward. Objective C has a 
good runtime, but lacks templates and CTFE. Using CTFE for an iOS 
program could be very cool.


How do you plan to handle automatic reference counting? I imagine 
that's a hard part. When I was writing Objective C I remember having to 
write bridged casts so I could manually extend or limit object 
lifetime, but I never handled it from within a C library.


Well, there's three ways.

(a) The first one is to implement ARC for Objective-C objects, and to 
automatically add/remove roots for member variables when 
constructing/destroying Objective-C objects that were defined in D, so 
the GC can see those pointers.


(b) The second one is to not implement ARC and implement something in 
the GC so it can track Objective-C objects: retain them on first sight, 
release them once no longer connected to a root.


(c) The third one is to implement ARC as an alternative memory 
management scheme for D and bolt Objective-C object support on top of 
it.


I'd tend to go for (a) at first, as it's the simplest thing that can be 
done. But I fear always adding/removing roots will impact performance 
in a negative way. There's also the issue in (a) and (b) that if the 
last reference to an object is released from the GC thread, the 
Objective-C object's destructor will be called in a different thread 
than expected, which might cause some bugs. So we might want to 
implement (c) later on to have something more solid and deterministic.
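The determinism argument for (c) is easy to see with plain reference counting: the destructor runs immediately, on the thread that drops the last reference, rather than whenever a collector thread gets around to it. A minimal C sketch (the names are illustrative, not the actual D or Objective-C runtime API):

```c
#include <assert.h>

/* Sketch of scheme (c): manual retain/release reference counting.
 * Unlike the GC-based schemes (a) and (b), deallocation happens
 * synchronously, on whichever thread releases the last reference. */
typedef struct RefCounted {
    int refcount;
    void (*dealloc)(struct RefCounted *self);
} RefCounted;

static int deallocs = 0;   /* counts destructor runs, for the demo */
static void count_dealloc(struct RefCounted *self)
{
    (void)self;
    deallocs++;
}

static void retain(RefCounted *obj)
{
    obj->refcount++;
}

static void release(RefCounted *obj)
{
    if (--obj->refcount == 0)
        obj->dealloc(obj);   /* deterministic: runs right here */
}
```

A real implementation also has to worry about atomicity of the count and about cycles, which is exactly the kind of "more solid" work option (c) defers until later.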


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: D/Objective-C 64bit

2014-03-12 Thread Michel Fortin

On 2014-03-12 09:26:56 +, Iain Buclaw ibuc...@gdcproject.org said:


On 12 March 2014 07:10, Jacob Carlborg d...@me.com wrote:

Yeah, since Objective-C uses the C calling convention it's mostly about
outputting symbols and data to the object files.


In what ABI may I ask?  Your choices are:
- Traditional (32bit) ABI without properties and Obj-C 2.0 additions
- Traditional (32bit) ABI with properties and Obj-C 2.0 additions
- Modern (64bit) ABI


I made the 32-bit legacy runtime support, Jacob added the 64-bit modern 
runtime support.


There's no support at this time for properties declarations in the ABI, 
but it doesn't really have much impact. As far as I'm aware, 
Objective-C 2.0 additions only include property declarations and 
attributes in the ABI.




That can be mixed in with either:
- GNU Runtime ABI
- NeXT Runtime ABI


It's been tested with the Apple (NeXT) runtime only. In all honesty, I, 
and probably most people out there, don't care about the GNU runtime. 
Although probably the GCC guys do. Do you think it'd make it more 
difficult to merge GDC into the GCC project if it had support for Apple's 
runtime and not for the GNU one?


Also, is there a list of differences between the two runtimes somewhere?



Each combination being incompatible with the others in subtly different ways...


Which is why we have a test suite.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: D/Objective-C 64bit

2014-03-12 Thread Michel Fortin
On 2014-03-12 17:53:35 +, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



On 3/12/14, 12:15 AM, Jacob Carlborg wrote:

On Wednesday, 12 March 2014 at 01:45:38 UTC, Andrei Alexandrescu wrote:


Great. Jacob, what's your plan to take this forward? We're very
interested in merging this as part of the official D compiler.


In theory I could create a pull request tonight. It depends on what
state we need the language support to be in. As I said exceptions are
missing on 64bit. But on the other hand when support for C++ was
introduced in D it had very limited support.

One idea is to merge the changes but wait with enabling the languages
changes. The test machines could run the tests with the changes enabled.


I'll defer to domain experts on this one. Please advise.


If the compiler is going to be converted to the D language (how is that 
progressing?), it'd probably be better to merge before that, otherwise 
it'll be a lot of work to port all those changes.


The first question should be about the review process. This patch touches 
a lot of things, so I wonder if Walter will be comfortable reviewing 
it. Should different people review different parts? Here's a comparison 
view:


DMD:  94 changed files with 8,005 additions and 48 deletions.
https://github.com/jacob-carlborg/dmd/compare/d-objc

druntime:  10 changed files with 1,263 additions and 0 deletions.
https://github.com/jacob-carlborg/druntime/compare/d-objc

Most of the changes to the compiler are inside #if DMD_OBJC/#endif 
blocks. Changes outside of those blocks shouldn't affect the semantics 
or the binary output of existing code. So I think a review could be 
done in two steps:


1. Review changes outside of those #if DMD_OBJC blocks. Those are the 
most critical changes as they'll affect the next version of the 
compiler that'll ship (I'm assuming Objective-C features won't be 
turned on until they're more usable). This includes some changes in the 
lexer, but it shouldn't affect current D code. This review could 
exclude the two files objc.h/objc.c, since the makefile ignores them 
without the D_OBJC flag.


2. Maybe review things inside of those #if DMD_OBJC blocks. Those 
things won't affect the compiler unless compiled with the D_OBJC flag, 
so it's less critical to review them. Most of them are there to 
implement Objective-C semantics so you'll need to be somewhat familiar 
with Objective-C to judge whether they're correct or not. What should 
be checked is whether an error would make them affect non-Objective-C 
constructs when they're compiled in.


We also need to know what to do about the test suite. I made a separate 
test suite for D/Objective-C since those tests can only run on OS X and 
only with the compiler compiled with Objective-C support enabled. It 
could easily be merged with the main test suite, but the tests should 
be made conditional on whether the compiler is compiled with 
Objective-C support or not.



--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca


