Re: Remus

2012-10-30 Thread Namespace

On Monday, 29 October 2012 at 22:09:02 UTC, bearophile wrote:

Namespace:


Not interested, huh? Funny, that I had not expected.


Maybe they appreciate more something that improves the life of 
regular D programmers. There are many possible ways to do that, 
like trying to design features that quite probably will be 
added to D, or trying library ideas that will become part of 
Phobos, trying new GC features, trying new Phobos modules, and 
so on and on.


Otherwise you risk creating another Delight 
(http://delight.sourceforge.net/ ) that no one uses, it's just 
a waste of time for you too.


Bye,
bearophile


Yes, but I need input. Tell me some ideas and I'll try to 
implement them. That way you could test new features in the real 
world, instead of just talking about them theoretically.
And it is not a 'waste of time'. My fellow students and I have 
used D for almost all university projects since as early as the 
second semester. But as a '(pre)compiler' we use Remus, simply 
because we have missed some features, such as not-null 
references, since the first week. And it's a damn good exercise 
for understanding how a compiler works. :)


Re: Remus

2012-10-30 Thread Rory McGuire
It would be really awesome if you could play around with making the AST
available during compilation so we can alter it using ctfe.


On Tue, Oct 30, 2012 at 8:34 AM, Namespace rswhi...@googlemail.com wrote:

 On Monday, 29 October 2012 at 22:09:02 UTC, bearophile wrote:

 Namespace:

  Not interested, huh? Funny, that I had not expected.


 Maybe they appreciate more something that improves the life of regular D
 programmers. There are many possible ways to do that, like trying to design
 features that quite probably will be added to D, or trying library ideas
 that will become part of Phobos, trying new GC features, trying new Phobos
 modules, and so on and on.

 Otherwise you risk creating another Delight 
 (http://delight.sourceforge.net/) that no one uses, it's just a
 waste of time for you too.

 Bye,
 bearophile


 Yes, but I need input. Tell me some ideas and I'll try to implement them.
 So you could just test new features in the real world, instead of just
 talking about them theoretically.
 And it is not 'waste of time'. Me and my fellow students use D as early as
 the second Semester for almost all university projects. But as '(pre)
 compiler' we use Remus just because we miss some features, such as not-null
 references, since the first week. And it's a damn good exercise to
 understand, how a compiler works. :)



Re: Remus

2012-10-30 Thread Philippe Sigaud
 It would be really awesome if you could play around with making the AST
 available during compilation so we can alter it using ctfe.

I have a compile-time parser and code generator project here:

https://github.com/PhilippeSigaud/Pegged

We are adding a D grammar there, and there is a compile-time D
parser (some current transatlantic cable problems make GitHub act
erratically from Europe, but that's transitory).
Pegged gives you a compile-time parse tree, which can then be
manipulated with CTFE and transformed back into other code (the last
part is not implemented for D specifically, but I have other
tree-transformation functions and they work alright at compile-time).

Next step (2013?) would be to have a working macro system for D.

Philippe
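
The general parse-and-generate idea can be sketched in plain D (this is a toy parser for illustration, not Pegged's actual API):

```d
// Toy "parser": splits "a+b+c" into identifiers, usable at compile time.
string[] parseSum(string code)
{
    string[] parts;
    string cur;
    foreach (ch; code)
    {
        if (ch == '+') { parts ~= cur; cur = ""; }
        else cur ~= ch;
    }
    parts ~= cur;
    return parts;
}

// CTFE code generator: emits D source that sums the parsed operands.
string genSum(string code)
{
    string result = "int sum(int[string] v) { return ";
    foreach (i, p; parseSum(code))
        result ~= (i ? " + " : "") ~ `v["` ~ p ~ `"]`;
    return result ~ "; }";
}

mixin(genSum("a+b+c")); // parsed and generated entirely at compile time

void main()
{
    assert(sum(["a": 1, "b": 2, "c": 3]) == 6);
}
```

Pegged applies the same mechanism with a real grammar and parse tree instead of this toy splitter.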


Re: Abstract Database Interface

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 04:22, BLM768 wrote:


If you make x some fancy wrapper type containing more fancy wrapper
types with overloaded equality operators that return some sort of
Expression class instead of a boolean, you might actually be able to get
this to work with only D's current features. However, that would kind of
destroy the hope of efficiency. :)


It can probably all be handled at compile time. The problem with this 
is that you cannot overload the following operators: &&, ||, != and 
probably some other useful operators.



What might be nice is a database written in D that completely eschews
SQL in favor of a native API. I might have to play with that eventually,
but I'll probably give it a while because it would be a huge project,
and, like most people, I'm under time constraints. :)


Yeah, I know.

--
/Jacob Carlborg


Re: Remus

2012-10-30 Thread Jacob Carlborg

On 2012-10-29 23:24, Rob T wrote:


Namespaces can be useful for organizational reasons. For example they
can be used for grouping a collection of items under one roof. However
you can already accomplish this and more using struct along with static
members.

struct io
{
static
{
 void print() { writeln("foo"); }
}
}

io.print();

Plus struct's come with additional abilities that can turn a simple
namespace into a much more capable one, for example by adding in ctors
and dtors.


Or using a template.

--
/Jacob Carlborg
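
A sketch of the template variant, for comparison (hypothetical example):

```d
import std.stdio : writeln;

// A zero-parameter template used purely as a namespace.
template io()
{
    void print() { writeln("foo"); }
}

void main()
{
    io!().print(); // members are reached through an instantiation
}
```

Unlike the struct version, the template produces no runtime artifact at all; the trade-off is the `!()` at the use site.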


Re: Abstract Database Interface

2012-10-30 Thread Philippe Sigaud
On Tue, Oct 30, 2012 at 9:15 AM, Jacob Carlborg d...@me.com wrote:
 On 2012-10-30 04:22, BLM768 wrote:

 If you make x some fancy wrapper type containing more fancy wrapper
 types with overloaded equality operators that return some sort of
 Expression class instead of a boolean, you might actually be able to get
 this to work with only D's current features. However, that would kind of
 destroy the hope of efficiency. :)


 It can probably all be handled at compile time. The problem with this is
 that you cannot overload the following operators: &&, ||, != and probably
 some other useful operators.

&& and || can be replaced by & and |, so there is a workaround.
I feel much more limited by != and, even more problematic, !. Maybe
unary - could be used in lieu of !.
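
For what it's worth, the wrapper idea under discussion might look roughly like this (all names are invented for illustration; since overloading == to return a non-bool is exactly the problem, an explicit eq method stands in for it, and & / | stand in for the non-overloadable && / ||):

```d
import std.conv : text;

// Hypothetical expression type: operators build SQL text instead of
// evaluating to a bool.
struct Expr
{
    string sql;

    // & and | stand in for && and ||, which cannot be overloaded.
    Expr opBinary(string op)(Expr rhs) if (op == "&" || op == "|")
    {
        return Expr("(" ~ sql ~ (op == "&" ? " AND " : " OR ") ~ rhs.sql ~ ")");
    }
}

struct Column
{
    string name;

    // Stand-in for ==, which is constrained to return bool.
    Expr eq(T)(T value)
    {
        return Expr(name ~ " = '" ~ text(value) ~ "'");
    }
}

void main()
{
    auto name = Column("name"), age = Column("age");
    auto e = name.eq("asd") & age.eq(42);
    assert(e.sql == "(name = 'asd' AND age = '42')");
}
```

This sketches the expression-tree trick without any AST access; a real library would also need escaping and parameter binding.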


Re: Abstract Database Interface

2012-10-30 Thread Kapps
On Tuesday, 30 October 2012 at 10:01:06 UTC, Philippe Sigaud 
wrote:
On Tue, Oct 30, 2012 at 9:15 AM, Jacob Carlborg d...@me.com 
wrote:

On 2012-10-30 04:22, BLM768 wrote:

If you make x some fancy wrapper type containing more fancy 
wrapper
types with overloaded equality operators that return some 
sort of
Expression class instead of a boolean, you might actually be 
able to get
this to work with only D's current features. However, that 
would kind of

destroy the hope of efficiency. :)



It can probably all be handled at compile time. The problem 
with this is that
you cannot overload the following operators: &&, ||, != and 
probably some

other useful operators.


&& and || can be replaced by & and |, so there is a workaround.
I feel much more limited by != and, even more problematic, !. 
Maybe

unary - could be used in lieu of !.


There was a pull request for __traits(codeof, func) that would 
return the code for a symbol, including lambda methods. It would 
probably be easier to have something like that for getting the 
AST and then using it to generate SQL queries (this is how C# / 
LINQ does it) than using sketchy hacks that go against the 
natural language feel. Though it apparently wouldn't be 
particularly easy to get that into the compiler, due to AST 
rewriting issues.


https://github.com/D-Programming-Language/dmd/pull/953



Re: Abstract Database Interface

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 13:04, Kapps wrote:


There was a pull request for __traits(codeof, func) that would return
the code for a symbol, including lambda methods. It would probably be
easier to have something like that for getting the AST and then using
that to generate SQL queries (this is how C# / LINQ does it) than using
sketchy hacks that go against the natural language feel. Though it
wouldn't be particularly easy to get that into the compiler, apparently
due to AST rewriting issues.


How would that work in this case? The code needs to compile. I mean, 
even if you can get the syntax of a function, process it correctly and 
generate SQL from it, the function still needs to compile.


--
/Jacob Carlborg


Re: Abstract Database Interface

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 10:59, Philippe Sigaud wrote:


&& and || can be replaced by & and |, so there is a workaround.
I feel much more limited by != and, even more problematic, !.  Maybe
unary - could be used in lieu of !.


How does that work with operator precedence? There's a plugin for 
ActiveRecord, called Squeel, that allows you to do something like this:


Person.where do |q|
  (q.name == 'asd') &
  (q.address == 'foo')
end

But because of the operator precedence in Ruby you need to wrap every 
comparison in parentheses, not very pretty.


--
/Jacob Carlborg


Re: Abstract Database Interface

2012-10-30 Thread Philippe Sigaud
On Tue, Oct 30, 2012 at 3:44 PM, Jacob Carlborg d...@me.com wrote:

 How does that work with operator precedence?
(...)
 But because of the operator precedence in Ruby you need to wrap every
 comparison in parentheses, not very pretty.

I think the problem would be the same here. Of course, to know D operator
precedence, you have to dig into the grammar, since there is no handy
table to give you that info :)


Re: Remus

2012-10-30 Thread Namespace

On Tuesday, 30 October 2012 at 07:25:14 UTC, Rob T wrote:

On Tuesday, 30 October 2012 at 06:34:26 UTC, Namespace wrote:
Yes, but I need input. Tell me some ideas and I'll try to 
implement them. So you could just test new features in the 
real world, instead of just talking about them theoretically.
And it is not 'waste of time'. Me and my fellow students use D 
as early as the second Semester for almost all university 
projects. But as '(pre) compiler' we use Remus just because we 
miss some features, such as not-null references, since the 
first week. And it's a damn good exercise to understand, how a 
compiler works. :)


I can see the value in testing ideas out in practice. I cannot 
see C++ style namespaces being all that useful, given that there 
are much better alternatives already available. Non-nullable 
references and AST macros would be very nice to try out. Wish 
we had these features in the real D right now.


--rt


I like them. But if so many people are against them, I can set up 
a vote to deprecate this feature. Not-null references are 
already available.
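
For readers following along, the not-null idea can be sketched in library code; this is a simplified guess, not Remus's actual implementation:

```d
// Minimal non-null wrapper: the check happens once at construction,
// after which the reference can be used without further null tests.
struct NotNull(T) if (is(T == class))
{
    private T payload;

    this(T value)
    {
        assert(value !is null, "null assigned to NotNull");
        payload = value;
    }

    @disable this(); // forbid default (null) construction

    alias payload this; // behaves like the wrapped reference
}

class Widget { int id = 7; }

void main()
{
    auto w = NotNull!Widget(new Widget);
    assert(w.id == 7);
    // NotNull!Widget(null) would trip the assert at run time
}
```

A compiler-supported version could reject the null assignment statically; the library sketch only moves the check to construction time.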


Re: Remus

2012-10-30 Thread Dmitry Olshansky

10/30/2012 8:09 AM, Philippe Sigaud wrote:

It would be really awesome if you could play around with making the AST
available during compilation so we can alter it using ctfe.


I have a compile-time parser and code generator project here:

https://github.com/PhilippeSigaud/Pegged

We are adding a D grammar there, and there is a compile-time D
parser (some current transatlantic cable problems make GitHub act
erratically from Europe, but that's transitory).
Pegged gives you a compile-time parse tree, which can then be
manipulated with CTFE and transformed back into other code (the last
part is not implemented for D specifically, but I have other
tree-transformation functions and they work alright at compile-time).



Cool. Reminds me that I need to find some more time to play with it 
(and the source).


--
Dmitry Olshansky


Objects in a Templated World

2012-10-30 Thread Jesse Phillips

I've written an article which goes over templates and objects.

http://nascent.freeshell.org/programming/D/objectTemplate.php

On a similar note I've republished _Learning to Program Using D_. 
Not a whole lot of change on the content front. Some expansions 
on existing chapters and a few fillers were added. Still very 
unfinished at around 50 pages.


http://nascent.freeshell.org/programming/D/LearningWithD/

I include a generated PDF and a pre.tex file. What is probably of 
more interest to others writing D books in LaTeX is that I have a 
program which handles building, running, and capturing output for 
the final tex file. It is very picky about formatting and can't 
handle file includes and probably many other fancy LaTeX options, 
but it is mine so :P


https://github.com/JesseKPhillips/listings-dlang-extractor

And finally code uses the listings package, for which I have 
provided a style file to handle highlighting.


https://github.com/JesseKPhillips/dlang-latex-listings



Re: Objects in a Templated World

2012-10-30 Thread Alex Rønne Petersen

On 30-10-2012 19:23, Jesse Phillips wrote:

I've written an article which goes over templates and objects.

http://nascent.freeshell.org/programming/D/objectTemplate.php

On a similar note I've republished _Learning to Program Using D_. Not a
whole lot of change on the content front. Some expansions on existing
chapters and a few fillers were added. Still very unfinished at around
50 pages.

http://nascent.freeshell.org/programming/D/LearningWithD/

I include a generated PDF and a pre.tex file. What is probably of more
interest to others writing D books in Latex is I have a program which
handles building running and capturing output for the final tex file.
It is very picky about formatting and can't handle file includes and
probably many other fancy Latex options but it is mine so :P

https://github.com/JesseKPhillips/listings-dlang-extractor

And finally code uses the listings package, for which I have provided a
style file to handle highlighting.

https://github.com/JesseKPhillips/dlang-latex-listings



I think you should cover C#. It allows virtual generic methods in its 
implementation of reified generics by relying on the JIT.


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Objects in a Templated World

2012-10-30 Thread Jesse Phillips
On Tuesday, 30 October 2012 at 18:27:39 UTC, Alex Rønne Petersen 
wrote:

On 30-10-2012 19:23, Jesse Phillips wrote:
I think you should cover C#. It allows virtual generic methods 
in its implementation of reified generics by relying on the JIT.


Sounds like a good idea. I'll have to dig into it, since at this 
time I don't really understand what it's doing.


Re: Remus

2012-10-30 Thread Namespace
Small update: cast with 'as' now works. A little syntax sugar. 
And that's about the last feature I will implement. Maybe some 
of you will test Remus and/or find a few bugs, or suggest other 
features that you would like to see in Remus. I look forward to 
suggestions. :)


Re: Objects in a Templated World

2012-10-30 Thread Jesse Phillips

On Tuesday, 30 October 2012 at 18:59:24 UTC, Jesse Phillips wrote:
On Tuesday, 30 October 2012 at 18:27:39 UTC, Alex Rønne 
Petersen wrote:

On 30-10-2012 19:23, Jesse Phillips wrote:
I think you should cover C#. It allows virtual generic methods 
in its implementation of reified generics by relying on the 
JIT.


Sounds like a good idea, I'll have to dig into it since at this 
time I don't really understand what that means it is doing.


Ok, I didn't realize C# allowed free-form generic methods. Good to 
know.


http://stackoverflow.com/questions/6573557/clr-how-virtual-generic-method-call-is-implemented

No time to update the article yet.


Re: Objects in a Templated World

2012-10-30 Thread deadalnix

Le 30/10/2012 19:27, Alex Rønne Petersen a écrit :

On 30-10-2012 19:23, Jesse Phillips wrote:

I've written an article which goes over templates and objects.

http://nascent.freeshell.org/programming/D/objectTemplate.php

On a similar note I've republished _Learning to Program Using D_. Not a
whole lot of change on the content front. Some expansions on existing
chapters and a few fillers were added. Still very unfinished at around
50 pages.

http://nascent.freeshell.org/programming/D/LearningWithD/

I include a generated PDF and a pre.tex file. What is probably of more
interest to others writing D books in Latex is I have a program which
handles building running and capturing output for the final tex file.
It is very picky about formatting and can't handle file includes and
probably many other fancy Latex options but it is mine so :P

https://github.com/JesseKPhillips/listings-dlang-extractor

And finally code uses the listings package, for which I have provided a
style file to handle highlighting.

https://github.com/JesseKPhillips/dlang-latex-listings



I think you should cover C#. It allows virtual generic methods in its
implementation of reified generics by relying on the JIT.



Wow, that is awesome! Do you have some documentation on the dirty 
details behind the technique?


Re: Objects in a Templated World

2012-10-30 Thread Paulo Pinto

On Tuesday, 30 October 2012 at 19:48:30 UTC, Jesse Phillips wrote:
On Tuesday, 30 October 2012 at 18:59:24 UTC, Jesse Phillips 
wrote:
On Tuesday, 30 October 2012 at 18:27:39 UTC, Alex Rønne 
Petersen wrote:

On 30-10-2012 19:23, Jesse Phillips wrote:
I think you should cover C#. It allows virtual generic 
methods in its implementation of reified generics by relying 
on the JIT.


Sounds like a good idea, I'll have to dig into it since at 
this time I don't really understand what that means it is 
doing.


Ok, didn't realize C# allowed free form generic methods. Good 
to know.


http://stackoverflow.com/questions/6573557/clr-how-virtual-generic-method-call-is-implemented

No time to update the article yet.


You should probably also have a look at how Eiffel, Modula-3 and Ada 
implement generics.


They are quite similar to C++ and D, with the constraint that the 
programmer has to explicitly instantiate which types are used.


--
Paulo


Re: Abstract Database Interface

2012-10-30 Thread Timon Gehr

On 10/30/2012 04:47 PM, Philippe Sigaud wrote:

On Tue, Oct 30, 2012 at 3:44 PM, Jacob Carlborg d...@me.com wrote:


How does that work with operator precedence?

(...)

But because of the operator precedence in Ruby you need to wrap every
comparison in parentheses, not very pretty.


I think the problem would be the same here. Of course, to know D operator
precedence, you have to dig into the grammar, since there is no handy
table to give you that info :)



From higher to lower, where relational ops are unordered with respect 
to bitwise ops (this is the reason comparisons would have to be wrapped 
in parentheses in D as well):


!
=> (not a real operator, occurs twice; this is its binding power to the left)
. ++ -- ( [
^^ (right-associative)
& ++ -- * - + ! ~ (prefix)
* / %
+ - ~
<< >> >>>
== != > < >= <= !> !< !>= !<= <> !<> <>= !<>= in !in is !is
&
^
|
&&
||
? (right-associative)
/= &= |= -= += <<= >>= >>>= = *= %= ^= ^^= ~= (right-associative)
=> (not a real operator, occurs twice; this is its binding power to the right)
,
.. (not a real operator)


Re: To avoid some linking errors

2012-10-30 Thread Daniel Murphy
Walter Bright newshou...@digitalmars.com wrote in message 
news:k6npgi$1hsr$1...@digitalmars.com...
 On 10/29/2012 9:51 PM, Daniel Murphy wrote: Walter Bright 
 newshou...@digitalmars.com wrote in message
  news:k6mun3$a8h$1...@digitalmars.com...
 
  The object file format does not support line numbers for symbol 
  references
  and definitions. None of the 4 supported ones (OMF, ELF, Mach-O, 
  MsCoff)
  have that. Even the symbolic debug info doesn't have line numbers for
  references, just for definitions.
 
  While this is true, you could scan the relocations for matching symbols,
  then use the debug information to get line numbers.  This would work for 
  all
  function calls at least.


 If the symbol is undefined, then there is no debug info for it.

There will be debug information for the call site if it is in the user's 
program.

eg

void foo();

void main()
{
   foo();
}

dmd testx -g
DMD v2.061 DEBUG
OPTLINK (R) for Win32  Release 8.00.12
Copyright (C) Digital Mars 1989-2010  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
testx.obj(testx)
 Error 42: Symbol Undefined _D5testx3fooFZv
--- errorlevel 1

objconv -dr testx.obj

Dump of file: testx.obj, type: OMF32
Checksums are zero

LEDATA, LIDATA, COMDAT and FIXUPP records:
  LEDATA: segment $$SYMBOLS, Offset 0x0, Size 0x4B
  FIXUPP:
   Direct farword 32+16 bit, Offset 0x30, group FLAT. Symbol __Dmain (T6), 
inlin
e 0x0:0x0
  COMDAT: name , Offset 0x0, Size 0xD, Attrib 0x00, Align 0, Type 0, Base 0
  FIXUPP:
   Relatv 32 bit, Offset 0x4, group FLAT. Symbol _D5testx3fooFZv (T6), 
inline 0x
1000E
  LEDATA: segment _DATA, Offset 0x0, Size 0xE
  LEDATA: segment FM, Offset 0x0, Size 0x4
  FIXUPP:
   Direct 32 bit, Offset 0x0, group FLAT. Segment _DATA (T4), inline 0x0
  LEDATA: segment $$TYPES, Offset 0x0, Size 0x16

The FIXUPP record gives Offset 0x4 for the address _D5testx3fooFZv, and the 
debug information for main will give the line number of that offset.

I wouldn't want to implement it in assembly though. 




Re: Decimal Floating Point types.

2012-10-30 Thread monarch_dodra

On Monday, 29 October 2012 at 22:49:16 UTC, H. S. Teoh wrote:

[...]

I thought it was better to use fixed-point with currency? Or at 
least,

so I've heard.


T


In most countries, if you are a bank, doing otherwise would 
actually be *illegal* ...


Re: isDroppable range trait for slicing to end

2012-10-30 Thread monarch_dodra
On Monday, 29 October 2012 at 19:20:34 UTC, Dmitry Olshansky 
wrote:

**Extract a slice, but with the explicit notion you *won't* get
back-assignability: auto myNewSlice = r.extractSlice(0, 10);


Another primitive or is that UFCS in the work?


That's just UFCS, not another primitive.


Now when to use it? I'd hate to see everything turning from
a[x..y]
to
a.extractSlice(x, y)
in generic code. Just because a lot of code doesn't need a 
slice to have the exact same type.
(I'm just following the simple rule of generic programming: if 
you don't require something - avoid using it)


Yes, that's a good point.

Note that this extractSlice notion would save a bit of 
functionality
for immutable ranges which *would* have slicing, but since 
they don't

support assign, don't actually verify hasSlicing...


immutable ranges are purely a theoretical notion. (Immutable 
elements, on the contrary, are ubiquitous.)


Not *that* theoretical when you think about it. std.ascii's digits 
etc. are all immutable ranges. They are a bad example, because 
they are strings (ergo un-sliceable), but as a rule of thumb, any 
global container can be saved as an immutable range. For example, 
I could define the first 10 integers as an immutable range. That 
range would be slice-able, but would not verify hasSlicing.



The way I see it, maybe a better solution would be a refinement of:

*hasSlicing:
**r = r[0 .. 1]; MUST work (so infinite is out)
*hasEndSlicing:
**r = r[1 .. $]; MUST work (intended for infinite, or to verify 
opDollar)


To which we could add limited variants: hasLimitedSlicing and 
hasLimitedEndSlicing, which would *just* mean we can extract a 
slice, but not necessarily re-assign it.


This seems like a simple but efficient solution. Thoughts?


The issue that I still have with slicing (between two indexes) 
infinite ranges is that even from an implementation standpoint, it 
makes little sense. There is little other way to implement it 
other than return this[i .. $].takeExactly(j - i); in which 
case, it would make little sense to require it as a primitive.


I'd rather have a global function in range.d, that would provide 
the implementation for any infinite range that provides 
has[Limited]EndSlicing.
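
The proposed refinement could be sketched as range traits along these lines (the names follow the post; the exact checks are my guess):

```d
import std.range : isInputRange;

template hasEndSlicing(R)
{
    // r[1 .. $] must compile AND the result must assign back to r,
    // i.e. the slice has the exact same type.
    enum hasEndSlicing = isInputRange!R
        && is(typeof((R r) { r = r[1 .. $]; }));
}

template hasLimitedEndSlicing(R)
{
    // r[1 .. $] must merely compile; the slice may be another type.
    enum hasLimitedEndSlicing = isInputRange!R
        && is(typeof((R r) { auto s = r[1 .. $]; }));
}

void main()
{
    // Dynamic arrays satisfy both forms.
    static assert(hasEndSlicing!(int[]));
    static assert(hasLimitedEndSlicing!(int[]));
}
```

hasEndSlicing implies hasLimitedEndSlicing by construction, which matches the intended hierarchy.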





opDollar questions

2012-10-30 Thread monarch_dodra

1.
I saw a pull request that claimed to fix opDollar go through 
recently. Does this mean we can now use it correctly? In both

r[0..$];
and
r[$..$];
forms?

2.
Would it be OK if I asked for a little implementation tutorial on 
opDollar, and how to correctly write one when dimensions can get 
involved? I'm unsure how it is done (I've seen both 
opDollar(size_t dim)() and opDollar(size_t dim))


3.
One last question: The possibility of using opDollar to slice a 
range to its end (in particular, infinite ranges) has been 
brought up more than once. May I request we have a normalized 
type:

struct InfDollar{}

Or something inside range.d?

This way, infinite ranges would not have to invent a new token 
themselves just for this, and simply implement auto 
opSlice(size_t i, InfDollar dol)?


4. In the context of the above question, would it be possible to 
implement opDollar simply as an enum?

enum opDollar = InfDollar.init;
Or would that be a problem?
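
Regarding questions 3 and 4, a sketch of how such a token could work (InfDollar is the name proposed above; the rest is my guess at the mechanics):

```d
struct Counter // an infinite range of 0, 1, 2, ...
{
    size_t front = 0;
    enum bool empty = false;
    void popFront() { ++front; }

    // A distinct token type standing in for $ on an infinite range.
    static struct InfDollar {}

    // Question 4: an enum does work as opDollar here.
    enum opDollar = InfDollar.init;

    auto opSlice(size_t i, InfDollar)
    {
        auto copy = this;
        copy.front = front + i;
        return copy; // still infinite, just advanced
    }
}

void main()
{
    Counter c;
    auto rest = c[5 .. $]; // rewritten to c.opSlice(5, Counter.opDollar)
    assert(rest.front == 5);
}
```

Since $ is rewritten to the range's opDollar, the distinct token type makes r[i .. $] resolvable even though no end index exists.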


Re: Why D is annoying =P

2012-10-30 Thread Rob T

On Tuesday, 30 October 2012 at 02:27:30 UTC, Mehrdad wrote:


I understand the problem, but it doesn't seem related to 
structs at all.


Any two attempts to compare two default-valued floats will 
fail, irrespective of whether or not they're inside structs.


True at the level of float-by-float comparisons, but not 
necessarily true when struct-by-struct bit pattern compares are 
done.


My original understanding of the struct type in D is that it is 
defined as a POD, which means == (without override) is done as a 
simple bit pattern compare, rather than doing a value-by-value 
compare of member data types. If this is the case, then two 
structs with default-valued equivalent float members should compare 
equal, even though a value-by-value compare would compare not 
equal.


TDPL clearly states that each of the struct's members are 
supposed to

be checked for equality (see section 7.1.5.2, p. 258 - 259).


So I guess that the POD definition of struct was changed at some 
point?


If so, then how are value by value compares performed when unions 
are involved? What about when there are member pointers, and 
other structures that cannot be compared in a meaningful way by 
default?


I guess the question is whether the bit pattern comparison of 
structs has sufficient justification to be worth having as the 
default behaviour, or if there is more value in attempting a 
value-by-value comparison, even though in many situations it may 
be pointless to attempt either strategy.


In my experience, I cannot recall doing very many direct struct 
comparisons, either as a complete value-by-value compare or as a 
full bit pattern compare. I just never found it to be all that 
useful. No matter, I can imagine both strategies being useful in 
some situations.


With operator overloading you can have both strategies, but only 
if bit pattern compare is the default strategy; plus, with 
overloading, compare can do exactly what you want to make sense 
out of unions and pointers, etc. So we should be good as is. I 
would expect that most times no one compares structs fully by 
value or by bit pattern anyway; to do so very often is difficult 
for me to imagine, but I could be wrong.


--rt



Re: Why D is annoying =P

2012-10-30 Thread Jonathan M Davis
On Tuesday, October 30, 2012 08:14:59 Rob T wrote:
  TDPL clearly states that each of the struct's members are
  supposed to
  be checked for equality (see section 7.1.5.2, p. 258 - 259).
 
 So I guess that the POD definition of struct was changed at some
 point?

Structs never were just PODs in D. They have _way_ more capabilities to them 
than that.

If you want bitwise comparison, then use the is operator. That's what it's 
for.

- Jonathan M Davis
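
A small illustration of the difference, relying on the fact that an uninitialized float defaults to NaN in D:

```d
struct S { float x; } // x defaults to float.nan

void main()
{
    S a, b;
    assert(a != b); // member-wise ==: nan == nan is false
    assert(a is b); // is compares the raw bits, which are identical
}
```

So == answers "are the values equal?" (NaN says no even to itself), while is answers "are the bit patterns identical?".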


Re: Why D is annoying =P

2012-10-30 Thread Rob T
On Tuesday, 30 October 2012 at 07:22:30 UTC, Jonathan M Davis 
wrote:

On Tuesday, October 30, 2012 08:14:59 Rob T wrote:

 TDPL clearly states that each of the struct's members are
 supposed to
 be checked for equality (see section 7.1.5.2, p. 258 - 259).

So I guess that the POD definition of struct was changed at 
some

point?


Structs never were just PODs in D. They have _way_ more 
capabilities to them

than that.


I do understand what you are saying; however, the docs here 
describe structs as PODs, with the capabilities you mention.

http://dlang.org/glossary.html#pod



If you want bitwise comparison, then use the is operator. 
That's what it's

for.


Where can I find an up-to-date language reference? What I'm 
reading does not seem to be up to date or complete. For example, 
I never saw mention of the is operator until now.


--rt



Re: To avoid some linking errors

2012-10-30 Thread Walter Bright

On 10/29/2012 11:08 PM, Daniel Murphy wrote:

void foo();


There will be no line information for the above.

 void main()
 {
foo();

For this, yes, but that is not what is being asked for.


Re: Why D is annoying =P

2012-10-30 Thread Jonathan M Davis
On Tuesday, October 30, 2012 08:37:47 Rob T wrote:
 Where can I find an up-to-date language reference? What I'm
 reading does not seem to be up to date or complete. For example,
 I never saw mention of the is operator until now.

It's in the online docs:

http://dlang.org/expression.html

It's in the Identity Expressions section. The best reference is TDPL though.

- Jonathan M Davis


Re: To avoid some linking errors

2012-10-30 Thread Daniel Murphy
Walter Bright newshou...@digitalmars.com wrote in message 
news:k6o0fd$25mm$1...@digitalmars.com...
 On 10/29/2012 11:08 PM, Daniel Murphy wrote:
 void foo();

 There will be no line information for the above.

  void main()
  {
 foo();

 For this, yes, but that is not what is being asked for.

It isn't?  Oops, my bad. 




Re: Has anyone built DStep for Win?

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 05:08, Nick Sabalausky wrote:

Like the subject says...


I compiled it on Windows before the initial release, but optlink 
refused to cooperate and I basically gave up. I guess I should try the 
other linker, ulink or whatever it's called. Or the new DMD with the 
COFF backend.


--
/Jacob Carlborg


Re: Imports with versions

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 01:51, bearophile wrote:

There are some updated on the Java-like language Ceylon:

http://ceylon-lang.org/blog/2012/10/29/ceylon-m4-analytical-engine/


One of the features of Ceylon that seems interesting are the module
imports:

http://ceylon-lang.org/documentation/1.0/reference/structure/module/#descriptor



An example:

doc "An example module."
module com.example.foo "1.2.0" {
 import com.example.bar "3.4.1";
 import org.example.whizzbang "0.5";
}


I think it helps avoid version troubles.

A possible syntax for D:

import std.random(2.0);
import std.random(2.0+);


It probably wouldn't be a bad idea to have this, but wouldn't it be 
better to have a package manager handle it?


--
/Jacob Carlborg


Re: To avoid some linking errors

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 02:58, Brad Roberts wrote:


If someone wanted to take on an ambitious task, one of the key problems
with output munging is the parseability of the output (which applies to
the compiler, linker, etc.. all the compiler chain tools).  Few (none?) of
them output text that's designed for parsability (though some make it
relatively easy).  It would be interesting to design a structured format
and write scripts to sit between the various components to handle adapting
the output.

Restated via an example:

today:
   compiler invokes tools and just passes on output

ideal (_an_ ideal, don't nitpick):
   compiler invokes tool which returns structured output and uses that

intermediate that's likely easier to achieve:
   compiler invokes script that invokes tool (passing args) and fixes
output to match structured output


Even better, in my opinion: both the linker and the compiler are built 
as libraries. The compiler just calls a function from the linker 
library, like any other function, to do the linking. The linker uses 
the appropriate exception handling mechanism, as any other function 
would. No need for tools calling each other and parsing output data.


--
/Jacob Carlborg


Re: To avoid some linking errors

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 03:48, Walter Bright wrote:


No need for :o), it's a fair question.

The linking process itself is pretty simple. The problems come from
designers who can't resist making things as complicated as possible.
Just look at the switches for the various linkers, and what they purport
to do. Then, look at all the complicated file formats it deals with:

res files
def files
linker script files
dwarf
codeview
magic undocumented formats
pe files
shared libraries
eh formats

And that's just the start.


The linker should not be directly built into the compiler. It should be 
built as a library, like the rest of the tool chain. The compiler then 
calls a function in the linker library to do the linking.


See one of my other replies:

http://forum.dlang.org/thread/jxiupltnfbmbvyxhe...@forum.dlang.org?page=5#post-k6o4en:242bld:241:40digitalmars.com

--
/Jacob Carlborg


Re: Make [was Re: SCons and gdc]

2012-10-30 Thread Rob T

On Tuesday, 30 October 2012 at 01:22:17 UTC, H. S. Teoh wrote:
To each his own, but I honestly don't see what's so difficult 
about

this:

# src/SConscript
Import('env')
env.Program('mydprogram', ['main.d', 'abc.d', 'def.cc'])

# lib1/SConscript
Import('env')
env.Library('mylib1', ['mod1.d', 'mod2.d'])

# lib2/SConscript
Import('env')
env.Library('mylib2', ['mod2.d', 'mod3.d'])

# SConstruct
objdir = 'build'
env = Environment()
Export('env')
env.SConscript('src/SConscript',  build_dir=objdir)
env.SConscript('lib1/SConscript', build_dir=objdir)
env.SConscript('lib2/SConscript', build_dir=objdir)

Main program in src/, two libraries in lib1, lib2, and 
everything builds
in build/ instead of the respective source trees. No problem. I 
even

threw in a C++ file for kicks.


You are right, Make cannot do something like that in a reasonable 
way, and it looks great.


You are describing one main project, with sub-level projects 
inside, so the build dump is still going into the project tree. 
This may work for some people, but it is frustrating for someone 
who wishes to dump the build files outside of the project tree. I 
guess there's just no way to do it differently at this time. Not 
what I want, but not a terminal problem either.


Also scons has no built-in ability to scan subfolders for source 
files. You can manually specify sub-level source files, but 
that's unreasonable to do for large projects. You can write 
custom Python code to scan subfolders, but that is a lot more 
complicated than it should be. Do you know of a decent solution 
to scan sub-folders?


Scons does look like a very powerful tool, and I really do 
appreciate your input and the other posters as well. I am 
determined not to continue with Make, so maybe I will have to 
keep trying to get scons to do what I want.


One more question: I am wondering how the scons D support locates 
dependencies from the imports specified in the source files? So 
far I have not been able to get automatic dependency inclusions 
to work, so I end up manually specifying the import files. With 
C++ and Make, I could get gcc to scan source files for the 
#include files, and dump the list to a dependency file (.dep), 
and then I could include the .dep file(s) into the make process. 
Can scons with D support do something similar, or deal with it 
better?


--rt



Re: SCons and gdc

2012-10-30 Thread Russel Winder
On Tue, 2012-10-23 at 14:58 -0700, H. S. Teoh wrote:
[…]
 Well, dmd tends to work best when given the full list of D files, as
 opposed to the C/C++ custom of per-file compilation. (It's also faster
 that way---significantly so.) The -op flag is your friend when it comes
 to using dmd with multi-folder projects.
 
 And I just tried: gdc works with multiple files too. I'm not sure how
 well it handles a full list of D files, though, if some of those files
 may not necessarily be real dependencies.

So perhaps the D tooling for SCons should move more towards the Java
approach than the C/C++/Fortran approach, i.e. a compilation step is a
single one depending only on source files and generating a known set of
outputs (which is easier than Java since it can generate an almost
untold number of output files, Scala is even worse).

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


signature.asc
Description: This is a digitally signed message part


Re: Make [was Re: SCons and gdc]

2012-10-30 Thread Russel Winder
On Tue, 2012-10-30 at 00:19 +0100, Rob T wrote:
[…]
 I definitely do not like Make. The scripts are made out of 
 garbage, and maintaining garbage just produces more waste. 
 Unfortunately for me, my attempts to make use of scons are not 
 encouraging. It may be better than Make, but not enough for me to 
 settle down with it.

I would suggest you haven't given SCons long enough to get into the
SCons idioms, but you have to go with the route that is most comfortable
for you.

 The two problems I mentioned were encountered almost immediately. 
 These are the inability to scan subfolders recursively, and the 
 inability to build to a level above the source folder. I don't 
 think either requirement has anything to do with thinking in terms 
 of Make. It could be that solving these two deficiencies may be 
 enough to keep me going with scons, I don't know.

I do not see why you need to scan subfolders recursively. This is a
traditional Make approach. SCons does things differently even for highly
hierarchical systems. In particular use of SConscript files handles
everything. So I do not think this is a SCons problem.

Of course if you have to do things recursively then os.walk is the
function you need.
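
A minimal sketch of that os.walk approach, in plain Python (the directory layout and the Program call in the comment are illustrative assumptions, not part of any real project):

```python
import os

def find_sources(root, ext=".d"):
    """Recursively collect source files under root with the given extension."""
    sources = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(ext):
                sources.append(os.path.join(dirpath, name))
    return sorted(sources)

# In an SConstruct one might then write, for example:
#   env.Program('myprog', find_sources('src'))
```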

[…]

 I don't think it's a bug, because it's actually documented as a 
 feature. It may however be a bug in terms of the assumptions 
 about how applications should be built.

As noted in the exchanges in which I cc'd you on everything I sent, SCons
does insist on having all directories in use under the directory with
the SConstruct – unless you work with Default. On reflection I am now
much less worried about this than I was at first.  Out-of-source-tree
builds I find essential in any build; SCons does this well, Make less
so. Out-of-project-tree builds I am now not really worried about.

[…]
 I have only used Make, and as bad as it is, at least I can scan 
 subfolders with one built-in command.

But why do you want to do this?  Why doesn't os.walk achieve what you
need with SCons?

[…]
 Scons is far too rigid with the assumptions it makes, and IMO 
 some of the assumptions are plain wrong.

I disagree. I find Make totally rigid and unyielding. Not to mention
rooted in 1977 assumptions of code.

 For example, building to a location out of the source tree has 
 the obvious advantage that your source tree remains a source 
 tree. I don't understand how anyone can consider this unusual or 
 not necessary. If a source tree is to be a tree containing source 
 code, then recursive scanning and building out of the tree is an 
 essential requirement.

I always build out of source tree using SCons, to do otherwise is
insanity, yes Autotools I am looking at you. However I have a source
directory in my project directory and can then have many build
directories in the project directory. Building a project for multiple
platforms makes this essential. SCons supports this very well with the
Variant system.
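
The multi-variant idiom described here can be sketched as an SConstruct fragment like the following. It is a configuration sketch rather than a standalone program (it relies on SCons being installed), and the paths, variant names, and the VARIANT construction variable are illustrative assumptions:

```python
# SConstruct -- build the same src/ tree into one build directory per variant.
for variant in ['debug', 'release']:
    env = Environment(VARIANT=variant)
    # variant_dir keeps all generated files out of the source tree;
    # duplicate=0 builds against the originals instead of copying them.
    SConscript('src/SConscript',
               exports='env',
               variant_dir='build/' + variant,
               duplicate=0)
```

Each pass reads the same src/SConscript but writes its objects under build/debug or build/release, so the source directory stays a source directory.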

[…]
 You are correct, only Python, which on a Linux system is normally 
 installed by default. I was referring to the need to manually 
 build scons from a source repository in order to get the latest 
 D support. I know I'm in the bleeding edge zone when it comes to 
 D, so a certain amount of hacking is needed, but I'd like to 
 minimize it as much as possible.

You don't need to install SCons to use it, you can use it from a clone
directly using the bootstrap system.  I have an alias 

alias scons='python 
/home/users/russel/Repositories/Mercurial/Masters/SCons_D_Tooling/bootstrap.py'

[…]
  Or fix SCons?
 
 I thought of that, however in order to fix scons, I would have to 
 learn a lot about scons, and also learn Python. The flaws that I 
 see with scons are so basic, I probably would not fit in with the 
 scons culture, so I see nothing but pain in trying to fix scons. 
 I'm also learning D, and would rather spend more of my time 
 learning D than something else. My only interest with scons is 
 for using it, not fixing it, and I have no interest in learning 
 Python.

Please stick with "I don't want to learn Python" as your reason for
not working with SCons. That is fine. I have no problem with that.

I do have a problem with you saying the flaws with SCons are "so basic".
This is just FUD from a person who hasn't really learnt the SCons way of
doing things.

So the resolution here is to stop mud-slinging at SCons and say "I am
not going to use SCons because it involves working with Python and I
don't want to do that." Then people can respect your position.

[…]
  (*) Think SCons → Python → Monty Python.
 
 That's how I view most of what is going on in programming land.

:-)


Re: Make [was Re: SCons and gdc]

2012-10-30 Thread Russel Winder
On Tue, 2012-10-30 at 10:08 +0100, Rob T wrote:
[…]

 Also scons has no built-in ability to scan subfolders for source 
 files. You can manually specify sub-level source files, but 
 that's unreasonable to do for large projects. You can write 
 custom Python code to scan subfolders, but that is a lot more 
 complicated than it should be. Do you know of a decent solution 
 to scan sub-folders?

Using os.walk is quite natural in SCons since SCons is just a Python
internal DSL. SConstruct and SConscript are internal DSL files (not
external DSL as Makefiles are), thus they are just Python programs.

 Scons does look like a very powerful tool, and I really do 
 appreciate your input and the other posters as well. I am 
 determined not to continue with Make, so maybe I will have to 
 keep trying to get scons to do what I want.

Mayhap then what you need to do is look at Parts. This is Jason Kenny's
superstructure over SCons to deal with huge projects scattered across
everywhere. This is the build framework Intel uses for their stuff.
Intel have more and bigger C and C++ projects than I think anyone else
around.

 One more question: I am wondering how the scons D support locates 
 dependencies from the imports specified in the source files? So 
 far I have not been able to get automatic dependency inclusions 
 to work, so I end up manually specifying the import files. With 
 C++ and Make, I could get gcc to scan source files for the 
 #include files, and dump the list to a dependency file (.dep), 
 and then I could include the .dep file(s) into the make process. 
 Can scons with D support do something similar, or deal with it 
 better?

Now this is a whole different issue!

I have no idea. If it is a problem it needs fixing.

The general philosophy in SCons is to have scanners that find the
dependencies. In C and C++ this is the awful #include. In D it's import.
The code should create a graph of the dependencies that can be walked to
create the list of inputs. There is a bit of code in the D tooling that
is supposed to do this. If it doesn't then we need a bug report and
preferably a small project exhibiting the problem that can be entered
into the test suite — SCons development is obsessively TDD.
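
As a rough illustration of what such a scanner does, here is a simplified Python sketch that pulls module names out of D import declarations. Real D import syntax has more forms (selective, renamed, and string imports) that this deliberately ignores:

```python
import re

# Matches simple declarations such as:
#   import std.stdio;
#   static import foo.bar, foo.baz;
IMPORT_RE = re.compile(r"^\s*(?:public\s+|static\s+)?import\s+([^;=]+);",
                       re.MULTILINE)

def scan_imports(source_text):
    """Return the module names imported by a D source file (simplified)."""
    modules = []
    for group in IMPORT_RE.findall(source_text):
        for mod in group.split(","):
            modules.append(mod.strip())
    return modules
```

A real implementation would also map each module name to a file path via the import search path, which is what the SCons D scanner has to do to build the dependency graph.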

I can only do stuff for Linux and OS X, I cannot do anything for
Windows, and Windows is the big problem for the SCons D tooling at the
moment which is why it has not got merged into the SCons mainline for
release in 2.3.0.



A plea for help: in order for SCons to have sane support for D, it needs
people to step up and actively assist in evolving the tools, either by
providing small test projects exhibiting fails, or running the tests on
Windows and providing patches for fails.

Currently I am not working in D, just in Python, Java, Groovy, Scala,
Ceylon, Kotlin, so I have no D code activity I can use to evolve the D
support in SCons. But if the D community can provide test cases, I can
get SCons evolved to do the needful.  But only on Linux and OS X, I
definitely need a collaborator interested in working on Windows.



Re: DMD on Haiku?

2012-10-30 Thread Iain Buclaw
On 30 October 2012 05:41, Alex Rønne Petersen a...@lycus.org wrote:
 On 29-10-2012 23:36, Isak Andersson wrote:

 Hello D-folks!

 I was just wondering if it would be possible to make DMD build out of
 the box for Haiku (haiku-os.org) with the source from the official DMD
 repo. Haiku is pretty darn POSIX compliant so the actual porting isn't
 much of a problem. DMD has run on Haiku a while ago and shouldn't
 have any problem doing it now. From what I hear from the Haiku community
 it was just to add a bunch of ifeq Haiku and stuff to make it build and
 run fine.

 What I want though is to get these things in to the main source of DMD,
 applying patches and stuff like that is a pain, it is so much better to
 just be able to clone and build without problems. So what I wanted to
 ask is: would Digital Mars accept a pull request to make DMD build on
 Haiku to their main branch on Github? I just wanted to know for sure
 before I go ahead and fork DMD to do this.

 Cheers!


 Do note that getting DMD, druntime, and phobos running on Haiku will take a
 lot of porting work. To name a few things:

 * All preprocessor #ifs in DMD for POSIX need to have Haiku added.
 * In all likelihood, DMD's port wrapper needs updating for Haiku.
 * druntime's POSIX headers all need to be updated for Haiku.
 * Any Haiku-specific header modules need to be added to druntime.
 * DMD, druntime, and phobos all need to be tested and debugged.
 * Probably other things I forgot.

 (This is all assuming Haiku is POSIX-compliant. If it isn't, it's going to
 be even more work, since most of druntime has two code paths: One for
 Windows and one for POSIX.)


Next, we'll be making dlang.org html, xhtml-strict, and haiku compliant... :o)

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: To avoid some linking errors

2012-10-30 Thread deadalnix

On 28/10/2012 21:59, Walter Bright wrote:

On 10/28/2012 1:34 PM, deadalnix wrote:

As Andrei stated, the linker's native language is encrypted klingon.


It baffles me that programmers would find undefined symbol hard to
make sense of.


Undefined symbol is the clear part of the message.

The mangled madness that follows it, which is supposed to help identify 
what symbol we are dealing with, is the unclear part.


Not to mention that ddemangle is unable to demangle many D manglings.


Re: DMD on Haiku?

2012-10-30 Thread Isak Andersson

On Tuesday, 30 October 2012 at 03:40:50 UTC, Brad Roberts wrote:

On 10/29/2012 4:27 PM, Isak Andersson wrote:
Well, I would say that I am pretty willing to do both those 
things. At least if I have the knowledge to do it! I'm not a
100% clear on what the second requirement means. Having a box 
running 24/7 that can run automated tests at any time? Or
just running the tests occasionally (like once or twice a week 
or so, or even just in time for every new DMD release)?


Send me email about #2.  The short form is that online 24/7 
(though not building all the time, just on-checkin) is the 
preference, but there's room for less frequent to be acceptable.  
It's pretty easy to set up the tester and it's pretty low 
maintenance.  I'd prefer to have access to the account that runs 
it (or easiest, just let me run it) to be able to update the 
tester code periodically.

Later,
Brad


Sure, what's your email though? I can't seem to figure out where 
I can find it on this forum thing :P I suppose I could put some 
money down to build a cheap machine that can be running all the 
time. Haiku isn't multi-user yet, so you'd have full access via 
ssh (if Haiku supports that; at least it has the ssh command 
afaik, but that's for connecting).


Re: DMD on Haiku?

2012-10-30 Thread Isak Andersson
Do note that getting DMD, druntime, and phobos running on Haiku 
will take a lot of porting work. To name a few things:


* All preprocessor #ifs in DMD for POSIX need to have Haiku 
added.
* In all likelihood, DMD's port wrapper needs updating for 
Haiku.

* druntime's POSIX headers all need to be updated for Haiku.
* Any Haiku-specific header modules need to be added to 
druntime.

* DMD, druntime, and phobos all need to be tested and debugged.
* Probably other things I forgot.

(This is all assuming Haiku is POSIX-compliant. If it isn't, 
it's going to be even more work, since most of druntime has two 
code paths: One for Windows and one for POSIX.)


Yep, it seems like it has run well after adding the 
preprocessor stuff. But of course it needs testing and all 
that. Haiku is very POSIX compliant although not 100%, but for 
most needs it is fine.


Re: DMD on Haiku?

2012-10-30 Thread Paulo Pinto

On Tuesday, 30 October 2012 at 12:11:41 UTC, Isak Andersson wrote:
Do note that getting DMD, druntime, and phobos running on 
Haiku will take a lot of porting work. To name a few things:


* All preprocessor #ifs in DMD for POSIX need to have Haiku 
added.
* In all likelihood, DMD's port wrapper needs updating for 
Haiku.

* druntime's POSIX headers all need to be updated for Haiku.
* Any Haiku-specific header modules need to be added to 
druntime.

* DMD, druntime, and phobos all need to be tested and debugged.
* Probably other things I forgot.

(This is all assuming Haiku is POSIX-compliant. If it isn't, 
it's going to be even more work, since most of druntime has 
two code paths: One for Windows and one for POSIX.)


Yep, it seems like it has run well after adding the 
preprocessor stuff. But of course it needs testing and all 
that. Haiku is very POSIX compliant although not 100%, but for 
most needs it is fine.


Based on my experience, POSIX compliance is like any standard.

You end up getting lots of #ifdefs for each POSIX system anyway. 
The only people who think POSIX is a standard without any issues 
only know GNU/Linux.


One thing missing from the list which costs a lot of effort is 
code generation.


Based on my toy Solaris experience with DMD, I think it is easier 
to use LDC or GDC for bringing D to other platforms.


--
Paulo


Re: To avoid some linking errors

2012-10-30 Thread Paulo Pinto

On Tuesday, 30 October 2012 at 08:53:49 UTC, Jacob Carlborg wrote:

On 2012-10-30 03:48, Walter Bright wrote:


No need for :o), it's a fair question.

The linking process itself is pretty simple. The problems come 
from
designers who can't resist making things as complicated as 
possible.
Just look at the switches for the various linkers, and what 
they purport
to do. Then, look at all the complicated file formats it deals 
with:


res files
def files
linker script files
dwarf
codeview
magic undocumented formats
pe files
shared libraries
eh formats

And that's just the start.


The linker should not be directly built into the compiler. It 
should be build as a library, like the rest of the tool chain. 
The compiler then calls a function in the linker library to do 
the linking.


See one of my other replies:

http://forum.dlang.org/thread/jxiupltnfbmbvyxhe...@forum.dlang.org?page=5#post-k6o4en:242bld:241:40digitalmars.com


This is most likely the approach taken by Delphi and .NET.



Re: Imports with versions

2012-10-30 Thread Paulo Pinto

On Tuesday, 30 October 2012 at 08:44:48 UTC, Jacob Carlborg wrote:

On 2012-10-30 01:51, bearophile wrote:

There are some updated on the Java-like language Ceylon:

http://ceylon-lang.org/blog/2012/10/29/ceylon-m4-analytical-engine/


One of the features of Ceylon that seems interesting are the 
module

imports:

http://ceylon-lang.org/documentation/1.0/reference/structure/module/#descriptor



An example:

doc "An example module."
module com.example.foo "1.2.0" {
    import com.example.bar "3.4.1";
    import org.example.whizzbang "0.5";
}


I think it helps avoid version troubles.

A possible syntax for D:

import std.random(2.0);
import std.random(2.0+);


It probably wouldn't be a bad idea to have this, but wouldn't it 
be better to have a package manager handle this?


.NET and OSGi are similar approaches, because they rely on 
dynamic linking.


The package manager only works with static linking, otherwise you 
might get into the situation where it compiles fine but runs into 
version conflicts when running. A common scenario for anyone doing 
Java development.


One issue both systems have problems solving is what to do when 
third party libraries have conflicting version requirements.


--
Paulo


Re: assert(false, ...) doesn't terminate program?!

2012-10-30 Thread Don Clugston

On 29/10/12 18:38, Walter Bright wrote:

On 10/29/2012 7:51 AM, Don Clugston wrote: On 27/10/12 20:39, H. S.
Teoh wrote:
  On Sat, Oct 27, 2012 at 08:26:21PM +0200, Andrej Mitrovic wrote:
  On 10/27/12, H. S. Teoh hst...@quickfur.ath.cx wrote:
   writeln("how did the assert not trigger??!!");// how
did we get
  here?!
 
  Maybe related to -release?
  [...]
 
  Haha, you're right, the assert is compiled out because of -release.
 
  But I disassembled the code, and didn't see the auto x = 1/toInt()
  either. Is the compiler optimizing that away?
 
  Yes, and I don't know on what basis it thinks it's legal to do that.

Because x is a dead assignment, and so the 1/ is removed.




Divide by 0 faults are not considered a side effect.


Ah, that's interesting, I didn't know that.



I think the code would be

better written as:

 if (toInt() == 0) throw new Error();

If you really must have a divide by zero fault,

 if (toInt() == 0) divideByZero();

where:

 void divideByZero()
 {
  static int x;
  *cast(int*)0 = x / 0;
 }


And that works because writes to 0 _are_ considered a side-effect?
Is that guaranteed to work?




Re: DMD on Haiku?

2012-10-30 Thread Alex Rønne Petersen

On 30-10-2012 14:46, Isak Andersson wrote:

Based on my experience POSIX compliance is like any standard.

You end up getting lots of #ifdef for each POSIX system anyway. The
only people that think POSIX is a standard without any issues, only
know GNU/Linux.

One thing missing from the list which costs a lot of effort, is code
generation.

Based on my toy Solaris experience with DMD, I think it is easier to
use LDC or GDC for bringing D to other platforms.

--
Paulo


Yeah, it seems like POSIX kind of failed in the sense that you can't
just have a simple posix makefile that works for any posix compliant os.


I direct you to the POSIX makefiles of DMD, druntime, and phobos. ;)



The problem with using those is that most D libraries are built with DMD
in mind, like Vibe.d. DMD is pretty much setting the standard for how D
behaves.


--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: DMD on Haiku?

2012-10-30 Thread Paulo Pinto

On Tuesday, 30 October 2012 at 13:46:22 UTC, Isak Andersson wrote:

Based on my experience POSIX compliance is like any standard.

You end up getting lots of #ifdef for each POSIX system 
anyway. The only people that think POSIX is a standard without 
any issues, only know GNU/Linux.


One thing missing from the list which costs a lot of effort, 
is code generation.


Based on my toy Solaris experience with DMD, I think it is 
easier to use LDC or GDC for bringing D to other platforms.


--
Paulo


Yeah, it seems like POSIX kind of failed in the sense that you 
can't just have a simple posix makefile that works for any 
posix compliant os.


The problem with using those is that most D libraries are built 
with DMD in mind, like Vibe.d. DMD is pretty much setting the 
standard for how D behaves.


Yeah there are two main issues with POSIX:

- versions, which means you never know how compliant a given 
system is


- like C, the standard allows for implementation defined behaviors

In the end #ifdef all the way, no different than using a 
non-POSIX system.


--
Paulo


Re: DMD on Haiku?

2012-10-30 Thread Isak Andersson

Based on my experience POSIX compliance is like any standard.

You end up getting lots of #ifdef for each POSIX system anyway. 
The only people that think POSIX is a standard without any 
issues, only know GNU/Linux.


One thing missing from the list which costs a lot of effort, is 
code generation.


Based on my toy Solaris experience with DMD, I think it is 
easier to use LDC or GDC for bringing D to other platforms.


--
Paulo


Yeah, it seems like POSIX kind of failed in the sense that you 
can't just have a simple posix makefile that works for any posix 
compliant os.


The problem with using those is that most D libraries are built 
with DMD in mind, like Vibe.d. DMD is pretty much setting the 
standard for how D behaves.


Re: To avoid some linking errors

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 09:51:34AM +0100, Jacob Carlborg wrote:
 On 2012-10-30 02:58, Brad Roberts wrote:
[...]
 today:
compiler invokes tools and just passes on output
 
 ideal (_an_ ideal, don't nitpick):
compiler invokes tool which returns structured output and uses that
 
 intermediate that's likely easier to achieve:
compiler invokes script that invokes tool (passing args) and fixes
 output to match structured output
 
 Even better, in my opinion: Both the linker and compiler is built
 as a library. The compiler just calls a function from the linker
 library, like any other function, to do the linking. The linker uses
 the appropriate exception handling mechanism as any other function
 would. No need for tools calling each other and parsing output data.
[...]

+1. This is 2012, we have developed the concept of libraries, why are we
still trying to parse output between two tools (compiler & linker) that
are so closely intertwined? Not to mention the advantages of having the
compiler and linker as a library: reusability in IDEs, adaptability to
*runtime* compilation, and a host of other powerful usages.


T

-- 
A one-question geek test. If you get the joke, you're a geek: Seen on a 
California license plate on a VW Beetle: 'FEATURE'... -- Joshua D. Wachs - 
Natural Intelligence, Inc.


Re: To avoid some linking errors

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 12:22:15PM +0100, deadalnix wrote:
 On 28/10/2012 21:59, Walter Bright wrote:
 On 10/28/2012 1:34 PM, deadalnix wrote:
 As Andrei stated, the linker's native language is encrypted klingon.
 
 It baffles me that programmers would find undefined symbol hard to
 make sense of.
 
 Undefined symbol is the clear part of the message.
 
 The mangled madness that follow that is supposed to help finding
 what symbol we are dealing with is the unclear part.
 
 Not to mention ddemangle is unable to demangle many D mangling.

Yeah, what's up with that? I looked briefly at the code, and there's a
comment that says that it only demangles a certain subset of symbols
because the others are useless when it comes to ABI's, or something
like that. I have no idea what that means and why it matters. What's
wrong with demangle() demangling *everything*? Isn't that what it's
supposed to be doing anyway?


T

-- 
The early bird gets the worm. Moral: ewww...


Re: Why D is annoying =P

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 08:57:01AM +0100, Tobias Pankrath wrote:
 Where can I find an up-to-date language reference? What I'm
 reading does not seem to be up to date or complete. For example, I
 never saw mention of the is operator until now.
 
 --rt
 
 It's hard to find because it cannot be overloaded.
 
 Look here: http://dlang.org/expression.html#IdentityExpression
 In TDPL it's explained as well.

Sigh, one of these days I'm gonna have to rewrite many of these pages.
I find them very hard to navigate and very unfriendly to newbies,
because very basic information (like what 'is' is) is buried deep in
long verbose infodumps of diverse language features, with no indication
at all which are fundamental concepts and which are just details. It's
virtually impossible to find what you're looking for unless you already
know what it is.

TDPL lays things out in a much saner fashion, but how many newbies
actually own the book? We need the online docs to be just as
newbie-friendly.


T

-- 
MSDOS = MicroSoft's Denial Of Service


Re: Imports with versions

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 13:25, Paulo Pinto wrote:


.NET and OSGi are similar approaches, because they rely on dynamic linking.

The package manager only works with static linking, otherwise you might
get into the situation it compiles fine, but runs into version conflicts
when running. Common scenario to anyone doing Java development.


I would say, in this case, that you haven't specified the versions 
correctly.



One issue both systems have problems solving is what to do when third
party libraries have conflicting version requirements.


Yeah, this can be a problem.

--
/Jacob Carlborg


Re: Why D is annoying =P

2012-10-30 Thread Tobias Pankrath
Sigh, one of these days I'm gonna have to rewrite many of these 
pages.
I find them very hard to navigate and very unfriendly to 
newbies,
because very basic information (like what 'is' is) is buried 
deep in
long verbose infodumps of diverse language features, with no 
indication
at all which are fundamental concepts and which are just 
details. It's
virtually impossible to find what you're looking for unless you 
already

know what it is.

TDPL lays things out in a much saner fashion, but how many 
newbies

actually own the book? We need the online docs to be just as
newbie-friendly.



I agree that the online docs are insufficient for learning the
language. But that's the case for Phobos, too. Both are just
listings of what is there and don't give you any overview of what
design decisions were made and what implications they have.

Just take a look at std.container.

I hope that Ali Çehreli's efforts will be a midterm solution, at
least for the language docs. Maybe he should get credit via a link
from the homepage to his book.




What is the use case for this weird switch mechanism

2012-10-30 Thread deadalnix

Today, I noticed by digging into D details the following construct :

switch(foo) {
statement;
case A:
// Stuffs . . .

// Other cases.
default:
// Stuffs . . .
}

What the hell is statement supposed to do? And what is the use case for 
this?


Re: SCons and gdc

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 09:18:28AM +, Russel Winder wrote:
 On Tue, 2012-10-23 at 14:58 -0700, H. S. Teoh wrote:
 […]
  Well, dmd tends to work best when given the full list of D files, as
  opposed to the C/C++ custom of per-file compilation. (It's also
  faster that way---significantly so.) The -op flag is your friend
  when it comes to using dmd with multi-folder projects.
  
  And I just tried: gdc works with multiple files too. I'm not sure
  how well it handles a full list of D files, though, if some of those
  files may not necessarily be real dependencies.
 
 So perhaps the D tooling for SCons should move more towards the Java
 approach than the C/C++/Fortran approach, i.e. a compilation step is a
 single one depending only on source files and generating a known set
 of outputs (which is easier than Java since it can generate an almost
 untold number of output files, Scala is even worse).
[...]

That's not a bad idea. I also noticed that gdc tends to produce smaller
executables when compiling in this way (I'm not sure why -- are
identical template instances not getting merged when compiling
separately?).


T

-- 
GEEK = Gatherer of Extremely Enlightening Knowledge


Re: What is the use case for this weird switch mechanism

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 05:16:48PM +0100, deadalnix wrote:
 Today, I noticed by digging into D details the following construct :
 
 switch(foo) {
 statement;
 case A:
 // Stuffs . . .
 
 // Other cases.
 default:
 // Stuffs . . .
 }
 
  What the hell is statement supposed to do? And what is the use case
  for this?

That's weird. I just did a quick test; apparently statement is never
run. I've no idea why it's allowed or what it's for.


T

-- 
Debian GNU/Linux: Cray on your desktop.


Re: Make [was Re: SCons and gdc]

2012-10-30 Thread Rob T

On Tuesday, 30 October 2012 at 09:42:50 UTC, Russel Winder wrote:
I would suggest you haven't given SCons long enough to get into 
the
SCons idioms, but you have to go with the route that is most 
comfortable

for you.


You are right, I'll shut up and keep at it. I get frustrated 
sometimes, but that's no reason to vent in here. Sorry.


I do not see why you need to scan subfolders recursively. This 
is a
traditional Make approach. SCons does things differently even 
for highly
hierarchical systems. In particular use of SConscript files 
handles

everything. So I do not think this is a SCons problem.


The only reason is to automate the construction of a list of 
source files for building. I'm used to automated build scripts, 
which require only a minimum of manual input. For example, if I 
add a new source file to the mix, I expect not to have to modify 
a SConstruct file to include it.


I see mention of placing a SConscript file in each subdir, but 
that's a fair amount of manual overhead to bother with. So there 
must be another way?


Of course if you have to do things recursively then os.walk is 
the

function you need.


The solutions I looked at did not mention os.walk, so I'll have a 
look. Thanks for the tip.


As noted in the exchanges to which I cc you in everything I 
sent, SCons
does insist on having all directories in use under the 
directory with
the SConstruct – unless you have with Default. On reflection 
I am now
much less worried about this than I was at first.  Out of 
source tree I
find essential in any build, SCons does this well, Make less 
so. Out of

project tree builds I am now not really worried about.


Yes, I see I can build out of the source tree, that's half the 
battle solved, but it still insists on building in the project 
tree, which I was hoping to also do away with. There's a 
disadvantage for me doing it in this way, so it's something of a 
step backwards for me (in terms of organizing things), which I'd 
rather not have to do, hence the frustration I've expressed. There 
must be a way to solve it somehow.


I disagree. I find Make totally rigid and unyielding. Not to 
mention

rooted in 1977 assumptions of code.


Yes I agree that Make sucks, and I hope I won't offend anyone by 
saying that. ;)


You don't need to install SCons to use it, you can use it from 
a clone

directly using the bootstrap system.  I have an alias

alias scons='python 
/home/users/russel/Repositories/Mercurial/Masters/SCons_D_Tooling/bootstrap.py'


Sounds great, but my lack of Python expertise means that I do not 
fully understand how this will work for me. I'll dig into it ...


Thanks for the input.

--rt



Status of Decimal Floating Point Module

2012-10-30 Thread Paul D. Anderson
There have been a couple of mentions of the decimal module lately 
so I thought I'd bring everyone up to speed. The short version is 
that it's probably at an alpha stage of development. If anyone 
wants to download it and try it out I'd appreciate the feedback. 
See below.


The software is an implementation of the General Decimal 
Arithmetic Specification 
(http://speleotrove.com/decimal/decarith.pdf).


Current Status:

Support for 4 different decimal types:
1) decimal32
2) decimal64
3) decimal128
4) big decimal -- arbitrary precision, but user must specify 
max coefficient and max exponent.


Support for all rounding modes called out in the specification.

Support for all arithmetic operations called out in the 
specification.


Notes:

I just finished implementing the decimal128 type. This was the 
last major hurdle. (It required an integer128 type so if you're 
interested that is also available, but only an unsigned version 
is working.)


The remaining work is cleanup, improving tests, improving docs, 
working through the TODOs, improving the exp, log and power 
functions, and implementing trig functions (which is a good test 
of basic arithmetic).


Feel free to download it from 
https://github.com/andersonpd/decimal and try it out.


I'll be happy to discuss the design, implementation, or anything 
else about it either here in this forum or offline by e-mail.


Paul




Re: To avoid some linking errors

2012-10-30 Thread Brad Roberts
On 10/30/2012 7:46 AM, H. S. Teoh wrote:
 On Tue, Oct 30, 2012 at 09:51:34AM +0100, Jacob Carlborg wrote:
 On 2012-10-30 02:58, Brad Roberts wrote:
 [...]
 today:
   compiler invokes tools and just passes on output

 ideal (_an_ ideal, don't nitpick):
   compiler invokes tool which returns structured output and uses that

 intermediate that's likely easier to achieve:
   compiler invokes script that invokes tool (passing args) and fixes
 output to match structured output

 Even better, in my opinion: Both the linker and compiler are built
 as libraries. The compiler just calls a function from the linker
 library, like any other function, to do the linking. The linker uses
 the appropriate exception handling mechanism as any other function
 would. No need for tools calling each other and parsing output data.
 [...]
 
 +1. This is 2012, we have developed the concept of libraries, why are we
 still trying to parse output between two tools (compiler & linker) that
 are so closely intertwined? Not to mention the advantages of having the
 compiler and linker as a library: reusability in IDEs, adaptability to
 *runtime* compilation, and a host of other powerful usages.
 
 
 T
 

I'm all for idealistic views, but neither of those matches reality in any 
meaningful way.  What I outlined is actually
practical.


Re: Make [was Re: SCons and gdc]

2012-10-30 Thread Rob T

On Tuesday, 30 October 2012 at 09:56:08 UTC, Russel Winder wrote:
Mayhap then what you need to do is look at Parts. This is Jason 
Kenny's
superstructure over SCons to deal with huge projects scattered 
across
everywhere. This is the build framework Intel uses for their 
stuff.
Intel have more and bigger C and C++ projects than I think 
anyone else

around


For the record, my projects are not very small but also not very 
big (whatever that means), mid size I suppose? I'll have a look 
at Parts, but I have a hard case of Bleeding Edge Syndrome, and I 
may not survive.



Now this is a whole different issue!

I have no idea. If it is a problem it needs fixing.

The general philosophy in SCons is to have scanners that find 
the
dependencies. In C and C++ this is the awful #include. In D 
it's import.
The code should create a graph of the dependencies that can be 
walked to
create the list of inputs. There is a bit of code in the D 
tooling that
is supposed to do this. If it doesn't then we need a bug report 
and
preferably a small project exhibiting the problem that can be 
entered

into the test suite — SCons development is obsessively TDD.


So far I've had no indication that it's working. Perhaps it is 
only working if the import files are not located in subfolders?


Will it account for something like this?

import a.b;

There must be something you did with the D support to give scons 
the ability to scan for imports. I know that #include can have 
sub-folders, which is common to see, such as


#include "a/b.h"

So scons must be able to include dependencies located in 
sub-folders.
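SCons's actual D scanner is more involved, but the idea under discussion -- scanning sources for import declarations and mapping dotted module names like "a.b" to files in sub-folders -- can be sketched in a few lines of Python. Everything here (the regex, the helper names) is illustrative, not SCons API, and the regex deliberately ignores string mixins, version blocks and multiple imports on one line:

```python
import re
from pathlib import Path

# Matches D import declarations such as "import a.b;" or
# "import std.stdio : writeln;" (a simplification).
IMPORT_RE = re.compile(r'^\s*(?:public\s+|static\s+)?import\s+([\w.]+)', re.M)

def scan_imports(source_text):
    """Return the module names imported by a D source file."""
    return IMPORT_RE.findall(source_text)

def module_to_path(module_name, roots=(".",)):
    """Map a dotted module name like 'a.b' to a file a/b.d, if it exists."""
    rel = Path(*module_name.split(".")).with_suffix(".d")
    for root in roots:
        candidate = Path(root) / rel
        if candidate.is_file():
            return candidate
    return None  # external/library module -- not a local dependency

print(scan_imports("import a.b;\nimport std.stdio : writeln;"))
# ['a.b', 'std.stdio']
```

A real scanner would then recurse into each discovered file to build the full dependency graph.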



I can only do stuff for Linux and OS X, I cannot do anything for
Windows, and Windows is the big problem for the SCons D 
tooling at the
moment which is why it has not got merged into the SCons 
mainline for

release in 2.3.0.


Currently I'm Linux only, but I have in the past built fairly 
complicated projects in Windows and I expect to be doing this 
again. This is another reason why I considered scons as a build 
solution because it is cross-platform.


A plea for help: in order for SCons to have sane support for D, 
it needs

people to step up and actively help evolve the tools, either by
providing small test projects exhibiting fails, or running the 
tests on

Windows and providing patches for fails.


Once I get things working well enough on Linux, I'll give it a 
try. Right now though, I'm bogged down with Linux, I need to make 
some breathing space first.


BTW, I use virtualbox as well as KVM extensively to run different 
OS's on same machine, so maybe you can try that route to get a 
Windows VM running for tests?


--rt



Re: What is the use case for this weird switch mecanism

2012-10-30 Thread Philippe Sigaud
 What the hell statement is supposed to do ? And what is the use case
 for this ?

 That's weird. I just did a quick test; apparently statement is never
 run. I've no idea why it's allowed or what it's for.

I've no idea why it's authorized, but it saved my day a week ago, in
an automatically-generated switch statement that happened to have a
"return true;" inserted at the very beginning. No unit test found that
and I saw it only by printing the generated code for another search.

In a way, it's logical: the code path jumps to the matching case, so
it never sees the first statement block before the first case.


Re: isDroppable range trait for slicing to end

2012-10-30 Thread Dmitry Olshansky

10/30/2012 6:53 AM, monarch_dodra wrote:

On Monday, 29 October 2012 at 19:20:34 UTC, Dmitry Olshansky wrote:

Note that this extractSlice notion would save a bit of functionality
for immutable ranges which *would* have slicing, but since they don't
support assign, don't actually verify hasSlicing...


Immutable ranges are a purely theoretical notion. (Immutable elements,
on the contrary, are ubiquitous.)


Not *that* theoretical when you think about it. ascii's digits etc are
all immutable ranges. They are a bad example, because they are strings
(ergo un-sliceable), but as a rule of thumb, any global container can be
saved as an immutable range.
For example, I could define first 10
integers as an immutable range. That range would be slice-able, but
would not verify hasSlicing.

You make a common mistake of confusing a container with a range over 
it. Ranges are means of iteration; they are mutable by definition -- 
every time you call popFront/popBack, the iteration state *changes*.


So you can't pop the first item of "first 10 integers". It's an 
immutable entity that you can't manipulate.


In that sense, slicing such an entity (a container) is the way of 
extracting a _mutable_ range from it. Yet the numbers it iterates 
over are immutable.




The way I see it, maybe a better solution would be a refinement of:

*hasSlicing:
**r = r[0 .. 1]; MUST work (so infinite is out)
*hasEndSlicing
**r = r[1 .. $]; Must work (intended for infinite, or to verify opDollar)



I suggest to stop there. In other words, introduce hasEndSlicing 
(awful name) and check self-assignability of both.



To which we could add limited variants: hasLimitedSlicing and
hasLimitedEndSlicing, which would *just* mean we can extract a slice,
but not necessarily re-assign it.


This repeats the same argument of extractSlice albeit differently.


This seems like a simple but efficient solution. Thoughts?


It's not simple. I suggest we drop non-self-assignable slicing 
altogether.


I claim that *if* you can't self assign a slice of a range it basically 
means that you are slicing something that is not meant to be a range but 
rather a container (adapter etc.).




The issue that I still have with slicing (between two indexes) infinite
ranges is that even on an implementation stand point, it makes little
sense. There is little other way to implement it other than return
this[i .. $].takeExactly(j - i); In which case, it would make little
sense to require it as a primitive.


Yup like I told:
- Infinite range just plain can't support slicing on 2 indexes (they 
have limited slicing, or one side slicing not full slicing)


It's just that I suggested excluding opSlice(x,y) from the primitives, 
unlike in my first post where I didn't think of solving the 
self-assignment problem.



I'd rather have a global function in range.d, that would provide the
implementation for any infinite range that provides has[Limited]EndSlicing.


Maybe though the utility of such a helper is limited (pun intended).


--
Dmitry Olshansky


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread Andrej Mitrovic
On 10/30/12, Philippe Sigaud philippe.sig...@gmail.com wrote:
 I've no idea why it's authorized

There could be a label for a goto there.


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread bearophile

deadalnix:

What the hell statement is supposed to do ? And what is the use 
case for this ?


See also this bug report I've opened time ago:
http://d.puremagic.com/issues/show_bug.cgi?id=3820

Bye,
bearophile


Re: To avoid some linking errors

2012-10-30 Thread Jesse Phillips

On Tuesday, 30 October 2012 at 07:43:41 UTC, Walter Bright wrote:

On 10/29/2012 11:08 PM, Daniel Murphy wrote:

void foo();


There will be no line information for the above.

 void main()
 {
foo();

For this, yes, but that is not what is being asked for.


Well, personally I think I would enjoy having this line number. 
The more information the better.


Re: DMD on Haiku?

2012-10-30 Thread Paulo Pinto
On Tuesday, 30 October 2012 at 13:55:42 UTC, Alex Rønne Petersen 
wrote:

On 30-10-2012 14:46, Isak Andersson wrote:

Based on my experience POSIX compliance is like any standard.

You end up getting lots of #ifdef for each POSIX system 
anyway. The
only people that think POSIX is a standard without any 
issues, only

know GNU/Linux.

One thing missing from the list which costs a lot of effort, 
is code

generation.

Based on my toy Solaris experience with DMD, I think it is 
easier to

use LDC or GDC for bringing D to other platforms.

--
Paulo


Yeah, it seems like POSIX kind of failed in the sense that you 
can't
just have a simple posix makefile that works for any posix 
compliant os.


I direct you to the POSIX makefiles of DMD, druntime, and 
phobos. ;)




Which as far as I am aware only work on POSIX == Linux.





Re: Imports with versions

2012-10-30 Thread Paulo Pinto

On Tuesday, 30 October 2012 at 15:00:03 UTC, Jacob Carlborg wrote:

On 2012-10-30 13:25, Paulo Pinto wrote:

.NET and OSGi are similar approaches, because they rely on 
dynamic linking.


The package manager only works with static linking, otherwise you 
might get into the situation where it compiles fine but runs into 
version conflicts when running. A common scenario to anyone doing 
Java development.


I would say, in this case, that you haven't specified the 
versions correctly.


Not really.

Let's say you compile everything fine, but on the deployment
platform, some IT guy has the cool idea of changing some 
configuration

settings.

That change will have the side effect that some third-party 
dependencies will now be matched to a version different from the 
one used by the package manager.


You'll spend a few hours tracking down the issue, unless you 
remember to check the dynamic resolution order.


This is why Plan 9, Singularity and the Go guys are so much 
against dynamic linking.


--
Paulo


Re: DMD on Haiku?

2012-10-30 Thread Alex Rønne Petersen

On 30-10-2012 19:35, Paulo Pinto wrote:

On Tuesday, 30 October 2012 at 13:55:42 UTC, Alex Rønne Petersen wrote:

On 30-10-2012 14:46, Isak Andersson wrote:

Based on my experience POSIX compliance is like any standard.

You end up getting lots of #ifdef for each POSIX system anyway. The
only people that think POSIX is a standard without any issues, only
know GNU/Linux.

One thing missing from the list which costs a lot of effort, is code
generation.

Based on my toy Solaris experience with DMD, I think it is easier to
use LDC or GDC for bringing D to other platforms.

--
Paulo


Yeah, it seems like POSIX kind of failed in the sense that you can't
just have a simple posix makefile that works for any posix compliant os.


I direct you to the POSIX makefiles of DMD, druntime, and phobos. ;)



Which as far as I am aware only work on POSIX == Linux.





Er... they work on Linux, OS X, FreeBSD, OpenBSD, Solaris/SunOS.

--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Make [was Re: SCons and gdc]

2012-10-30 Thread Jérôme M. Berger
Rob T wrote:
 On Tuesday, 30 October 2012 at 09:42:50 UTC, Russel Winder wrote:
 I would suggest you haven't given SCons long enough to get into the
 SCons idioms, but you have to go with the route that is most comfortable
 for you.
 
 You are right, I'll shut up and keep at it. I get frustrated sometimes,
 but that's no reason to vent in here. Sorry.
 
 I do not see why you need to scan subfolders recursively. This is a
 traditional Make approach. SCons does things differently even for highly
 hierarchical systems. In particular use of SConscript files handles
 everything. So I do not think this is a SCons problem.
 
 The only reason is to automate the construction of a list of source
 files for building. I'm used to using automated build scripts, which
 require only a minimal of manual input. For example, if I add a new
 source file to the mix, then I am expecting to not have to modify a
 SConstruct file to include it.
 
 I see mention of placing a SConsript file in each subdir, but that's a
 fair amount of manual overhead to bother with. so there must be another
 way?
 

The following SConstruct will scan all subfolders of the current
folder for D sources and compile them into a "foo" program.

==8--
import os
sources = []
for dirpath, dirnames, filenames in os.walk ("."):
    sources += [ os.path.join (dirpath, f)
                 for f in filenames
                 if f.endswith (".d") ]
Program ("foo", sources)
--8==

Jerome
-- 
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr



signature.asc
Description: OpenPGP digital signature


[OT] .NET is compiled to native code in Windows Phone 8

2012-10-30 Thread Paulo Pinto
Now Build 2012 is happening and the new Windows Phone 8 features 
have been revealed.


One of the most interesting is that .NET applications are 
actually compiled to native code as well, before being made 
available for download.


http://blogs.msdn.com/b/dotnet/archive/2012/10/30/announcing-the-release-of-the-net-framework-for-windows-phone-8.aspx

Assuming Microsoft eventually releases a native code compiler for 
C# (better than NGEN), this will make D a harder sell in the 
enterprise. :\


--
Paulo


Re: Why D is annoying =P

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 17:09, Tobias Pankrath wrote:


I agree that the online docs are insufficient for learning the
language. But that's the case for Phobos, too. Both are just
listings of what is there and don't give you any overview of
what design decisions were made and what implications they have.

Just take a look at std.container.

I hope that Ali Çehreli's efforts will be a midterm solution, at
least for the language docs. Maybe he should get credit by linking
from the homepage to his book.


A language needs several types of documentation. Reference documentation 
(basically what we have now), higher level documentation, tutorials and 
examples.


--
/Jacob Carlborg


Re: To avoid some linking errors

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 18:34, Brad Roberts wrote:


I'm all for idealistic views, but neither of those matches reality in any 
meaningful way.  What I outlined is actually
practical.


Well yes, for an already existing tool. But I see no reason why one 
wouldn't use this approach when developing a new tool. Lately, all 
the tools I write are built as a library plus a fairly thin 
executable that calls the library. Besides letting the library be 
used in other tools, it separates and modularizes the code, making 
it easier to test, and so on.


--
/Jacob Carlborg


Re: Imports with versions

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 19:42, Paulo Pinto wrote:


Not really.

Let's say you compile everything fine, but on the deployment
platform, some IT guy has the cool idea of changing some configuration
settings.

That change will have as side effect that some third party dependencies
will now be matched to another version different than what was used by
the package manager.


Again, then you haven't specified the dependencies correctly. If 
you just specify that "foo" depends on "bar" then you have only 
yourself to blame. You need to specify the exact version, i.e. 
"bar-1.2.3". If you cannot specify the exact version of an indirect 
dependency then you're not using very good tools.


RubyGems together with Bundler is a great package manager. You specify 
the direct dependencies of your software and the tool will write down 
all dependencies, direct and indirect, in a locked file, including all 
versions.


When you deploy your software it will install and use the packages 
from the locked file. Your code cannot access any dependencies that 
are not listed in the file.
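The lock-file mechanism described above can be illustrated with a small Python sketch. All names and the data layout here are hypothetical -- this mirrors the idea behind Bundler's lock step, not any real package manager's format. The point is that resolution consults only the pinned versions, so a stray newer install cannot be picked up silently:

```python
# Hypothetical lock file: every dependency, direct and indirect,
# pinned to the exact version recorded at lock time.
lockfile = {
    "rails": "3.2.8",
    "activesupport": "3.2.8",  # indirect dependency, also pinned
    "rack": "1.4.1",
}

# What happens to be installed on the deployment machine.
installed = {
    "rails": ["3.2.8"],
    "activesupport": ["3.2.8", "4.0.0"],  # two versions side by side
    "rack": ["1.4.1"],
}

def resolve(name):
    """Load exactly the locked version -- never 'whatever is newest'."""
    wanted = lockfile.get(name)
    if wanted is None:
        raise KeyError("dependency %r is not in the lock file" % name)
    if wanted not in installed.get(name, []):
        raise RuntimeError("%s %s is locked but not installed" % (name, wanted))
    return name, wanted

print(resolve("activesupport"))  # ('activesupport', '3.2.8'), not 4.0.0
```

An IT guy's configuration change can alter what is installed, but not what the lock file demands -- the mismatch fails loudly instead of silently loading the wrong version.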



You'll spend a few hours trying to track down the issue, in case you
forget about checking the dynamic resolution order.

This is why Plan 9, Singularity and the Go guys are so much against
dynamic linking.


One can always do stupid things; it can be quite hard to protect 
yourself against that. I mean, the IT guy could just replace your 
newly deployed application with a different version, and no static 
linking in the world can help you there.


--
/Jacob Carlborg


Re: To avoid some linking errors

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 08:55:26PM +0100, Jacob Carlborg wrote:
 On 2012-10-30 18:34, Brad Roberts wrote:
 
 I'm all for idealistic views, but neither of those matches reality in
 any meaningful way.  What I outlined is actually practical.
 
 Well yes, for an already existing tool. But I see no reason why one
 wouldn't use this approach when developing a new tool. Lately, all
 my tools I write are built as libraries and a fairly thin executable
 that calls the library. Except that one can use the library in other
 tools it separates and modularize the code, easier to test and so
 on.
[...]

I have recently come to the conclusion that *all* programs should be
written as (potential) libraries with thin executable wrappers. Any
other approach will suffer from reusability issues down the road.
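As a minimal illustration of that library-plus-thin-wrapper layout, here is a hypothetical Python "wordcount" tool (not anyone's actual code): all logic sits in an importable function, and the entry point only handles arguments and I/O.

```python
import io
import sys

def count_words(text):
    """Library half: pure logic, reusable from any other tool or test."""
    return len(text.split())

def main(argv, out=sys.stdout, err=sys.stderr):
    """Executable half: argument handling and I/O only."""
    if len(argv) != 2:
        print("usage: wordcount FILE", file=err)
        return 2
    with open(argv[1]) as f:
        print(count_words(f.read()), file=out)
    return 0

# The shipped executable would end with:
#   if __name__ == "__main__": raise SystemExit(main(sys.argv))
# while another program simply imports count_words() directly.
print(count_words("reusability down the road"))  # 4
```

Because the wrapper contains nothing but plumbing, a second tool (or an IDE, or a test suite) reuses the logic with a plain import instead of shelling out and parsing output.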


T

-- 
One Word to write them all, One Access to find them, One Excel to count
them all, And thus to Windows bind them. -- Mike Champion


Re: Imports with versions

2012-10-30 Thread Jacob Carlborg

On 2012-10-30 21:02, H. S. Teoh wrote:


But doesn't that mean the version wasn't correctly depended upon?

There should NEVER be generic dynamic library dependencies (i.e., I can
link with any version of libc), because they are almost always wrong in
some way. Maybe not obvious at first, but still wrong. They should
always depend on the *exact* version (or versions) of a library.
Anything else is fundamentally broken and will eventually cause
headaches and sleepless nights.

Mind you, though, a lot of libraries have a totally broken versioning
system. Many library authors believe that the version only needs to be
bumped when the API changes. Or worse, only when the existing API
changes (new parts of the API are discounted.) That is actually wrong.
The version needs to be bumped every time the *ABI* changes. An ABI
change can include such things as compiling with different flags, or
with a different compiler (*cough*gdc*dmd*cough*), *even when the source
code hasn't been touched*. Any library that breaks this rule is
essentially impossible to work with, because there is no way to
guarantee that what gets linked at runtime is actually what you think it
is.

Unfortunately, in practice, both of the above are broken repeatedly.


I completely agree. See my answer about RubyGems and Bundler:

http://forum.dlang.org/thread/ycigekrnsvjuulbxu...@forum.dlang.org#post-k6pc33:241ma3:241:40digitalmars.com

Packages in RubyGems are using Semantic Versioning:

http://semver.org/

Which is supposed to help with these problems. But Ruby doesn't have a 
problem with breaking an ABI.
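For reference, the SemVer compatibility rule can be stated in a few lines of Python. This is a sketch of the spec's API-level contract only -- as the quoted post points out, it says nothing about ABI breaks caused by compiler flags or a different compiler:

```python
def parse(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of integers."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def compatible(installed, required):
    """True if 'installed' can stand in for 'required' under SemVer:
    same MAJOR (excluding the unstable 0.x series), at least as new."""
    i, r = parse(installed), parse(required)
    return i[0] == r[0] and i[0] > 0 and i >= r

print(compatible("1.4.2", "1.2.0"))  # True: additive changes only
print(compatible("2.0.0", "1.2.0"))  # False: MAJOR bump may break the API
```

A dependency tool enforcing this still needs exact-version locking (or an ABI hash) to catch the binary-level breakage discussed above.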


--
/Jacob Carlborg


Re: Why D is annoying =P

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 08:50:50PM +0100, Jacob Carlborg wrote:
 On 2012-10-30 17:09, Tobias Pankrath wrote:
 
 I agree, that the online docs are insufficient for learning the
 language.  But that's the case for phobos, too. Both are just a
 listings of what is there and don't give you any overview of what
 design decisions were made and what implications they have.
 
 Just take a look std.container.
 
 I hope that Ali Çehreli efforts will be midterm solution at
 least for the language docs. Maybe he should credits by linking
 from the homepage to his book.
 
 A language needs several types of documentation. Reference
 documentation (basically what we have now), higher level
 documentation, tutorials and examples.
[...]

I contend that much of the current documentation isn't even up to
reference standard.

Incompleteness, for one thing. Things like Throwable and Exception
aren't even documented right now (though this has been fixed in git
HEAD). I'm sure there are many other fundamental holes.

And a randomly-sorted list of unrelated module items does not constitute
a reference, either. It has to be at least sorted alphabetically, or
preferably, by logical categories. And things like class members need to
be properly indented (I think this was fixed recently) instead of being
flattened out, making it impossible to discern whether it belongs to the
previous declaration or the global module scope. Moreover, nested
classes/structs, etc., need to be put AFTER simpler members. It's
basically unreadable when, for example, two int members are separated by
the docs of a 2-page nested struct.

And don't even get me started on navigability. Dumping a morass of
#-links at the top of the page does not a navigable page make. Some
modules NEED to have docs split into separate pages. A 10-page infodump
of randomly sorted items is simply impossible to use effectively.
Clickable identifiers would be nice, so that you don't have to
separately navigate and lookup a particular symbol when you're not sure
what it means, while trying to keep track of where you left off (and I
thought we were in the age of automation...).


T

-- 
The right half of the brain controls the left half of the body. This means that 
only left-handed people are in their right mind. -- Manoj Srivastava


Re: Imports with versions

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 07:42:13PM +0100, Paulo Pinto wrote:
 On Tuesday, 30 October 2012 at 15:00:03 UTC, Jacob Carlborg wrote:
 On 2012-10-30 13:25, Paulo Pinto wrote:
 .NET and OSGi are similar approaches, because they rely on
 dynamic linking.
 
 The package manager only works with static linking, otherwise you
 might get into the situation it compiles fine, but runs into version
 conflicts when running. Common scenario to anyone doing Java
 development.
 
 I would say, in this case, that you haven't specified the versions
 correctly.
 
 Not really.
 
 Let's say you compile everything fine, but on the deployment platform,
 some IT guy has the cool idea of changing some configuration settings.
 
 That change will have as side effect that some third party
 dependencies will now be matched to another version different than
 what was used by the package manager.

But doesn't that mean the version wasn't correctly depended upon?

There should NEVER be generic dynamic library dependencies (i.e., I can
link with any version of libc), because they are almost always wrong in
some way. Maybe not obvious at first, but still wrong. They should
always depend on the *exact* version (or versions) of a library.
Anything else is fundamentally broken and will eventually cause
headaches and sleepless nights.

Mind you, though, a lot of libraries have a totally broken versioning
system. Many library authors believe that the version only needs to be
bumped when the API changes. Or worse, only when the existing API
changes (new parts of the API are discounted.) That is actually wrong.
The version needs to be bumped every time the *ABI* changes. An ABI
change can include such things as compiling with different flags, or
with a different compiler (*cough*gdc*dmd*cough*), *even when the source
code hasn't been touched*. Any library that breaks this rule is
essentially impossible to work with, because there is no way to
guarantee that what gets linked at runtime is actually what you think it
is.

Unfortunately, in practice, both of the above are broken repeatedly.


T

-- 
We've all heard that a million monkeys banging on a million typewriters
will eventually reproduce the entire works of Shakespeare.  Now, thanks
to the Internet, we know this is not true. -- Robert Wilensk


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread deadalnix

On 30/10/2012 at 18:57, Andrej Mitrovic wrote:

On 10/30/12, Philippe Sigaudphilippe.sig...@gmail.com  wrote:

I've no idea why it's authorized


There could be a label for a goto there.


That still doesn't explain what the use case is.


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread deadalnix

On 30/10/2012 at 18:47, Philippe Sigaud wrote:

What the hell statement is supposed to do ? And what is the use case
for this ?


That's weird. I just did a quick test; apparently statement is never
run. I've no idea why it's allowed or what it's for.


I've no idea why it's authorized, but it saved my day a week ago, in
an automatically-generated switch statement that happened to have a
return true; inserted at the very beginning. No unit test found that
and I saw it only by printing the generated code for another search.

In a way, it's logical: the code path jumps to the matching case, so
it never sees the first statement block before the first case.


I usually want to avoid code working in an unexpected way, even when 
it makes code work where I expect it shouldn't.

I wrote about this publicly a few months ago, and, considering how 
much feedback I got, I'm not the only one.


Re: To avoid some linking errors

2012-10-30 Thread Andrei Alexandrescu

On 10/30/12 2:13 PM, Jesse Phillips wrote:

On Tuesday, 30 October 2012 at 07:43:41 UTC, Walter Bright wrote:

On 10/29/2012 11:08 PM, Daniel Murphy wrote:

void foo();


There will be no line information for the above.

 void main()
 {
 foo();

For this, yes, but that is not what is being asked for.


Well, personally I think I would enjoy having this line number. The more
information the better.


Not sure I'm following but essentially if foo() is undefined the most 
interesting file/line references would be for all calls to foo(). The 
lines declaring foo() are nice to have but much less interesting.


Andrei


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread Nick Sabalausky
On Tue, 30 Oct 2012 21:39:31 +0100
deadalnix deadal...@gmail.com wrote:

 On 30/10/2012 at 18:57, Andrej Mitrovic wrote:
  On 10/30/12, Philippe Sigaudphilippe.sig...@gmail.com  wrote:
  I've no idea why it's authorized
 
  There could be a label for a goto there.
 
 That still don't explain what the use case is.

Obfuscated coding contests?



Re: Imports with versions

2012-10-30 Thread Paulo Pinto

On Tuesday, 30 October 2012 at 20:08:04 UTC, Jacob Carlborg wrote:

On 2012-10-30 19:42, Paulo Pinto wrote:


Not really.

Let's say you compile everything fine, but on the deployment
platform, some IT guy has the cool idea of changing some 
configuration

settings.

That change will have as side effect that some third party 
dependencies
will now be matched to another version different than what was 
used by

the package manager.


Again, then you haven't specified the dependencies correctly. 
If you just specify that foo depends on bar then you have 
only yourself to blame. You need to specify the exact version, 
i.e. bar-1.2.3. If you cannot specify the exact version of an 
indirect dependency then you're not using a very good tools.




This cannot be enforced at runtime for most languages, which is 
why I was generalizing.

For example, C and C++ require the programmer to do this somehow.

Java requires you to bundle something like OSGi with your 
application.

Of the languages I have real project experience with, only Groovy 
and .NET provide out-of-the-box mechanisms for runtime validation 
of the versions the package manager resolved.


--
Paulo


Re: To avoid some linking errors

2012-10-30 Thread Brad Roberts
On Tue, 30 Oct 2012, Andrei Alexandrescu wrote:

 On 10/30/12 2:13 PM, Jesse Phillips wrote:
  On Tuesday, 30 October 2012 at 07:43:41 UTC, Walter Bright wrote:
   On 10/29/2012 11:08 PM, Daniel Murphy wrote:
void foo();
   
   There will be no line information for the above.
   
void main()
{
foo();
   
   For this, yes, but that is not what is being asked for.
  
  Well, personally I think I would enjoy having this line number. The more
  information the better.
 
 Not sure I'm following but essentially if foo() is undefined the most
 interesting file/line references would be for all calls to foo(). The lines
 declaring foo() are nice to have but much less interesting.
 
 Andrei

I don't think I agree.  The key questions that come to mind from an 
undefined symbol error at link time are:

1) Why did the compiler believe the symbol would be there?  If it didn't 
think it was a valid usable symbol, it would have already errored at the 
call site during semantic analysis.

2) What's the fully qualified name of the symbol?  Maybe the compiler is 
matching a different symbol than expected.  However, if that were the case, 
either it'd link against the unexpected match successfully, or see also 
#1.


Having the location of some or all of the call sites might help you find 
#1, but that's a poor substitute for pointing directly at #1.


Re: DMD on Haiku?

2012-10-30 Thread Paulo Pinto
On Tuesday, 30 October 2012 at 18:53:23 UTC, Alex Rønne Petersen wrote:

On 30-10-2012 19:35, Paulo Pinto wrote:
On Tuesday, 30 October 2012 at 13:55:42 UTC, Alex Rønne Petersen wrote:
On 30-10-2012 14:46, Isak Andersson wrote:

Based on my experience, POSIX compliance is like any standard: 
you end up getting lots of #ifdefs for each POSIX system anyway. 
The only people who think POSIX is a standard without any issues 
only know GNU/Linux.

One thing missing from the list, which costs a lot of effort, is 
code generation.

Based on my toy Solaris experience with DMD, I think it is easier 
to use LDC or GDC for bringing D to other platforms.

--
Paulo

Yeah, it seems like POSIX kind of failed in the sense that you 
can't just have a simple POSIX makefile that works for any 
POSIX-compliant OS.

I direct you to the POSIX makefiles of DMD, druntime, and 
phobos. ;)

Which as far as I am aware only work on POSIX == Linux.

Er... they work on Linux, OS X, FreeBSD, OpenBSD, Solaris/SunOS.

Ok, I was a bit stupid with my remark, sorry about that.

Anyway, I remember that when I tried my toy experiment with porting 
DMD to Solaris I had to do some patches.

You would be surprised what commercial UNIX systems understand as 
POSIX vs. what the standard says. Somehow I don't miss my days 
porting software among UNIX platforms.


--
Paulo


Re: To avoid some linking errors

2012-10-30 Thread Andrei Alexandrescu

On 10/30/12 5:07 PM, Brad Roberts wrote:

On Tue, 30 Oct 2012, Andrei Alexandrescu wrote:


On 10/30/12 2:13 PM, Jesse Phillips wrote:

On Tuesday, 30 October 2012 at 07:43:41 UTC, Walter Bright wrote:

On 10/29/2012 11:08 PM, Daniel Murphy wrote:

void foo();


There will be no line information for the above.


void main()
{
foo();


For this, yes, but that is not what is being asked for.


Well, personally I think I would enjoy having this line number. The more
information the better.


Not sure I'm following but essentially if foo() is undefined the most
interesting file/line references would be for all calls to foo(). The lines
declaring foo() are nice to have but much less interesting.

Andrei


I don't think I agree.  The key questions that come to mind from an
undefined symbol error at link time are:

1) Why did the compiler believe it would be?  If it didn't think it was a
valid usable symbol, it would have already errored at the call site during
semantic analysis.


Not getting this at all. All I'm saying is that if the compiler says "I 
can't find that foo() you're asking for", the most interesting piece of 
information for me is: where did I ask for it?



2) what's the fully qualified name of the symbol?  Maybe the compiler is
matching a different symbol than expected.  However, if that was the case,
either it'd link against the unexpected match successfully, or see also
#1.


That's in the symbol.


Andrei


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread bearophile

Nick Sabalausky:


Obfuscated coding contests?


It's there to help programmers create more bugs, of course :o)

Bye,
bearophile


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread Era Scarecrow

On Tuesday, 30 October 2012 at 21:11:57 UTC, bearophile wrote:

Nick Sabalausky:


Obfuscated coding contests?


It's there to help programmers create more bugs, of course :o)


 Maybe variable declarations (as long as they have defaults)? That has 
a certain amount of sense, but it makes more sense to do it outside 
the switch case...


Re: To avoid some linking errors

2012-10-30 Thread Brad Roberts
On Tue, 30 Oct 2012, Andrei Alexandrescu wrote:

 On 10/30/12 5:07 PM, Brad Roberts wrote:
  1) Why did the compiler believe it would be?  If it didn't think it was a
  valid usable symbol, it would have already errored at the call site during
  semantic analysis.
 
 Not getting this at all. All I'm saying is that if the compiler says I can't
 find that foo() you're asking for, the most interesting piece of information
 for me is where did I ask for it?

Ok, so it points to a place in the code where you used it.  You look at 
that and say, yup, I did indeed use it.  Not surprising.  Now, why doesn't 
the linker find it?  Why did the compiler believe it existed?

The site of the usage isn't remotely useful for answering either of those 
questions and those are the ones that form the disconnect between the 
compiler, who believed it existed and let the code pass to the next 
stage), and the linker.


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread H. S. Teoh
On Tue, Oct 30, 2012 at 04:51:33PM -0400, Nick Sabalausky wrote:
 On Tue, 30 Oct 2012 21:39:31 +0100
 deadalnix deadal...@gmail.com wrote:
 
  Le 30/10/2012 18:57, Andrej Mitrovic a écrit :
   On 10/30/12, Philippe Sigaudphilippe.sig...@gmail.com  wrote:
   I've no idea why it's authorized
  
   There could be a label for a goto there.
  
  That still don't explain what the use case is.
 
 Obfuscated coding contests?

Is this the official announcement for the inception of the IODCC? ;-) 
(cf. www.ioccc.org).


T

-- 
Those who've learned LaTeX swear by it. Those who are learning LaTeX swear at 
it. -- Pete Bleackley


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread bearophile

Era Scarecrow:

 Maybe variable declaration (as long as they are default(s))? 
Has a certain amount of sense, but makes more sense to do it 
outside the switch case...


Declaring variables there is dangerous:

import std.stdio;

struct Foo {
    int x = 10;
}

void main() {
    int bar;
    switch (bar) {
        Foo f;
        case 10: break;
        default: writeln(f); // prints garbage
    }
}


Bye,
bearophile


Re: To avoid some linking errors

2012-10-30 Thread Andrei Alexandrescu

On 10/30/12 5:40 PM, Brad Roberts wrote:

On Tue, 30 Oct 2012, Andrei Alexandrescu wrote:


On 10/30/12 5:07 PM, Brad Roberts wrote:

1) Why did the compiler believe it would be?  If it didn't think it was a
valid usable symbol, it would have already errored at the call site during
semantic analysis.


Not getting this at all. All I'm saying is that if the compiler says I can't
find that foo() you're asking for, the most interesting piece of information
for me is where did I ask for it?


Ok, so it points to a place in the code where you used it.  You look at
that and say, yup, I did indeed use it.  Not surprising.


In the presence of overloading, that is most often surprising. And 
usually it's not I who uses it, it's transitively called by some code 
I write. That's the hardest part.



Now, why doesn't
the linker find it?


Because it's declared but not explicitly made part of the project.


Why did the compiler believe it existed?


Grep takes care of that. Finding the declaration is never a problem.


The site of the usage isn't remotely useful for answering either of those
questions and those are the ones that form the disconnect between the
compiler, who believed it existed and let the code pass to the next
stage), and the linker.


Non-issue. Grep takes care of that. Finding the cross-references and the 
overloading part are the hard problems here. This is so clear to me for 
so many reasons, I am paralyzed by options.



Andrei


Re: assert(false, ...) doesn't terminate program?!

2012-10-30 Thread Walter Bright

On 10/30/2012 5:53 AM, Don Clugston wrote:
  I think the code would be
 better written as:

  if (toInt() == 0) throw new Error();

 If you really must have a divide by zero fault,

  if (toInt() == 0) divideByZero();

 where:

  void divideByZero()
  {
   static int x;
   *cast(int*)0 = x / 0;
  }

 And that works because writes to 0 _are_ considered a side-effect?
 Is that guaranteed to work?

Writing through a pointer is considered a side effect.


Re: What is the use case for this weird switch mecanism

2012-10-30 Thread Era Scarecrow

On Tuesday, 30 October 2012 at 21:40:26 UTC, bearophile wrote:

Era Scarecrow:

Maybe variable declaration (as long as they are default(s))?

Declaring variables there is dangerous:



switch(bar) {
Foo f;
case 10: break;
default: writeln(f); // prints garbage
}


 Then it's as though it were '= void;' by default. Most curious. 
Honestly, I'd say it should be illegal to have anything before the 
first callable case; besides, for gotos it's illegal to jump past 
declarations, yet this switch allows it.


 I'd say one of two things must happen then:

  1) Code before the first case is disallowed.
  2) Code before the first case always runs.

 Option 2 seems silly and unneeded, except that it allows a small 
scope during the switch, which is its only possible advantage. The 
only other advantage is that you could have a case disabled and 
enable it during certain debugging builds, but in those cases why 
not do the whole block?



switch (bar) {
    static if (DEBUG) {
        case -10: /* disabled case, or something like that */
            break;
    }
    Foo f;

    case 10: break;
    default: break;
}




Re: To avoid some linking errors

2012-10-30 Thread Andrei Alexandrescu

On 10/30/12 6:00 PM, Walter Bright wrote:

I find it ironic that you mention grep. Grep is what I suggested in the
link to dealing with the issue, but nobody likes that answer.

http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined


Mangling is the issue here.


The site of the usage isn't remotely useful for answering either of
those
questions and those are the ones that form the disconnect between the
compiler, who believed it existed and let the code pass to the next
stage), and the linker.


Non-issue. Grep takes care of that. Finding the cross-references and the
overloading part are the hard problems here. This is so clear to me
for so many
reasons, I am paralyzed by options.


If you're missing a definition, you need to find the declaration, not
the use, because that's where the missing definition needs to go.
And finally, grepping for an un-mangled name doesn't work for overloaded
names.


Upon more thinking, I agree that BOTH declarations and call sites must 
be clearly pointed out in the errors output by the linker. But I'll still 
point out that the real difficulty is finding the calls, not the 
declarations. I don't ever remember having a hard time with "where is 
this declared?" Instead, the hard problem has always been "What is the 
call chain that leads to an undefined symbol?"



Andrei


Re: To avoid some linking errors

2012-10-30 Thread Walter Bright

On 10/30/2012 2:49 PM, Andrei Alexandrescu wrote:

On 10/30/12 5:40 PM, Brad Roberts wrote:

On Tue, 30 Oct 2012, Andrei Alexandrescu wrote:


On 10/30/12 5:07 PM, Brad Roberts wrote:

1) Why did the compiler believe it would be?  If it didn't think it was a
valid usable symbol, it would have already errored at the call site during
semantic analysis.


Not getting this at all. All I'm saying is that if the compiler says I can't
find that foo() you're asking for, the most interesting piece of information
for me is where did I ask for it?


Ok, so it points to a place in the code where you used it.  You look at
that and say, yup, I did indeed use it.  Not surprising.


In the presence of overloading, that is most often surprising. And usually it's
not I who uses it, it's transitively called by some code I write. That's the
hardest part.


Now, why doesn't
the linker find it?


Because it's declared but not explicitly made part of the project.


Why did the compiler believe it existed?


Grep takes care of that. Finding the declaration is never a problem.


I find it ironic that you mention grep. Grep is what I suggested in the link to 
dealing with the issue, but nobody likes that answer.


http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined



The site of the usage isn't remotely useful for answering either of those
questions and those are the ones that form the disconnect between the
compiler, who believed it existed and let the code pass to the next
stage), and the linker.


Non-issue. Grep takes care of that. Finding the cross-references and the
overloading part are the hard problems here. This is so clear to me for so many
reasons, I am paralyzed by options.


If you're missing a definition, you need to find the declaration, not the use, 
because that's where the missing definition needs to go.


And finally, grepping for an un-mangled name doesn't work for overloaded names.
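This is easy to see in D itself: a symbol's mangled name encodes the module, the identifier, and the parameter types, so each overload is a distinct link-time symbol. A minimal sketch (the module name app is an assumption for illustration):

```d
module app;

import std.algorithm : canFind;

int foo(int x) { return x; }   // .mangleof yields this function's link-time symbol

void main()
{
    // The mangled name embeds the qualified name (e.g. something like
    // _D3app3fooFiZi), which is what the linker reports for an undefined
    // symbol; grepping sources for the plain identifier "foo" searches
    // for something different.
    assert(foo.mangleof.canFind("3app"));
    assert(foo.mangleof.canFind("3foo"));
}
```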



Re: 48 hour game jam

2012-10-30 Thread Manu
On 19 October 2012 01:12, F i L witte2...@gmail.com wrote:

 Trying to build in Linux, but having problems.

 I follow the steps from the github wiki "How to build under Windows", except I
 run 'Fuji/create_project.sh' instead of '.bat'... now I'm a bit confused as
 to what steps to take. Running 'Fuji/make' has errors, and running
 'Stache/make_project.sh' then 'make' gives me:

 make[1]: *** No targets.  Stop.
 make: *** [Stache] Error 2

 which I assume is because Fuji isn't built (?). Help please!

 Nice screenshot, btw :)


Did you ever follow my linux build instructions?
I'd be curious to know if anyone else can successfully build it.


Re: 48 hour game jam

2012-10-30 Thread Manu
On 18 October 2012 23:58, Jacob Carlborg d...@me.com wrote:

 On Thursday, 18 October 2012 at 20:11:56 UTC, Manu wrote:

  Ah yes, what do the OSX OpenGL libs look like? GLX is only a very thin
 front end on a fairly conventional OpenGL. It's only a couple of functions
 that would be replaced by some mac variant I expect.


 Mac OS X has the OpenGL framework:

 https://developer.apple.com/library/mac/#documentation/GraphicsImaging/Reference/CGL_OpenGL/Reference/reference.html#//apple_ref/doc/uid/TP40001186

 And a couple of high level Objective-C classes. This is the programming
 guides for OpenGL on Mac OS X:

 https://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_intro/opengl_intro.html


  I'll come on IRC in 5 minutes or so.


 I won't be online tonight, it's getting late here. Tomorrow or perhaps
 saturday.


I've fixed all the outstanding Linux stuff, I'd love to finish off that OSX
stuff some time soon.


Re: 48 hour game jam

2012-10-30 Thread Manu
I have fixed all associated bugs now and the Linux build runs properly.
There was a recently fixed DMD bug that causes a problem with the camera;
if you're using an older DMD, the camera will have problems.

GDC should theoretically work fine, but my binary is out of date (from the
debian package manager) and it doesn't work for me, so I haven't proven it
works.

On 19 October 2012 11:02, Manu turkey...@gmail.com wrote:

 On 19 October 2012 01:12, F i L witte2...@gmail.com wrote:

 Trying to build in Linux, but having problems.

 I follow the steps from github wiki How to build under Windows, except
 I run 'Fuji/create_project.sh' instead of '.bat'... now I'm a bit confused
 as to what steps to take. Running 'Fuji/make' has errors, and running
 'Stache/make_project.sh' - 'make' gives me:

 make[1]: *** No targets.  Stop.
 make: *** [Stache] Error 2

 which I assume is because Fuji isn't built (?). Help please!

 Nice screenshot, btw :)


 I added Linux build instructions to the wiki, includes the Fuji build
 instructions. There are a few differences from Windows.
 The main problem is a bug in premake4 for D, it seems to throw an error
 generating makefiles for my project (D support is new and experimental).
 I plan to look into the premake bug this weekend. You can generate a
 monodevelop project though, which works fine with the Mono-D plugin
 installed. That's how I'm building/running/debugging on Linux at the moment.



Transience of .front in input vs. forward ranges

2012-10-30 Thread H. S. Teoh
Now that Andrei is back (I think?), I want to bring up this discussion
again, because I think it's important.

Recently, in another thread, it was found that std.algorithm.joiner
doesn't work properly with input ranges whose .front value is
invalidated by popFront(). Andrei stated that for input ranges .front
should not be assumed to return a persistent value, whereas for forward
ranges, .front can be assumed to be persistent. However, Jonathan
believes that .front should never be transient.

Obviously, both cannot be the case simultaneously. So we need to decide
exactly what it should be, because the current situation is subtly
broken, and this subtle brokenness is pervasive. For example, I recently
rewrote joiner to eliminate the assumption that .front is persistent,
only to discover that in the unittests, I can't use array() or equal()
(or, for that matter, writefln()), because they apparently all make this
assumption at some point (I didn't bother to find out exactly where).

In other words, right now input ranges really only work with arrays and
array-like objects. Not the generic ranges that Andrei has in mind in
his article On Iteration. Many input ranges will subtly break, the
prime whipping boy example being byLine (which I hate to bring up
because it does not represent the full scope of such transient ranges),
a range that returns in-place permutations of an array, or anything that
reuses a buffer, really.
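For concreteness, a minimal sketch of such a buffer-reusing range (the names are invented for illustration, this is not Phobos code):

```d
struct Squares
{
    int[] buf;   // single internal buffer, overwritten by every popFront()
    int n;

    this(int start) { n = start; buf = [start * start]; }

    @property bool empty() const { return n > 4; }
    @property int[] front() { return buf; }   // transient: aliases buf
    void popFront() { ++n; buf[0] = n * n; }  // reuse the buffer in place
}

void main()
{
    auto r = Squares(1);
    auto first = r.front;      // aliases the internal buffer
    r.popFront();
    assert(first is r.front);  // still the same underlying array...
    assert(first[0] == 4);     // ...so the value read earlier is gone
}
```

Any algorithm that holds on to .front across a popFront() call, as array() and equal() apparently do, silently sees the overwritten value.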

This situation isn't as simple as input ranges being transient and
forward ranges not, though. I want to bring another example besides the
dead horse byLine() to the spotlight. Let's say you have a range R that
spans all permutations of some starting array A. For efficiency reasons,
we don't want to allocate a new array every time we return a
permutation; so we have an internal buffer in R that holds the current
permutation, which is what .front returns. Then popFront() simply
permutes this buffer in-place. Something like this:

import std.algorithm : equal, nextPermutation;

struct AllPermutations(T) {
    T[] front, first;
    bool done;

    this(T[] initialBuf) {
        first = initialBuf;
        front = first.dup;
    }
    void popFront() {
        nextPermutation(front);   // permute the buffer in place
        if (equal(front, first))
            done = true;
    }
    @property bool empty() {
        return done;
    }
}

This is an input range, according to Andrei's definition. The value of
.front is transient, since popFront() modifies it in-place. According to
Jonathan's definition, however, this isn't a valid range for that very
reason.

Now consider what happens if we add this member:

    auto save() {
        AllPermutations!T copy;
        copy.front = this.front.dup;  // dup so the copy owns its own buffer
        copy.first = this.first;
        copy.done = this.done;
        return copy;
    }

This returns a separate instance of the same range, starting with the
current permutation, and ending with the original permutation, as
before. I submit that this makes it a forward range. However, this fails
to be a forward range under Andrei's definition, because forward ranges
require .front to be persistent. So we'd have to modify the range to be
something like this:

struct AllPermutations(T) {
    T[] current, first;
    bool done;

    this(T[] initialBuf) {
        first = initialBuf;
        current = first.dup;
    }
    void popFront() {
        nextPermutation(current);
        if (equal(current, first))
            done = true;
    }
    @property bool empty() {
        return done;
    }
    @property T[] front() {
        return current.dup; // --- note this line
    }
    auto save() {
        AllPermutations!T copy;
        copy.current = this.current.dup;
        copy.first = this.first;
        copy.done = this.done;
        return copy;
    }
}

Note that now we have to duplicate the output array every time .front is
accessed. So whatever gains we may have had by using nextPermutation to
modify the array in-place are lost, just so that we can conform to an
arbitrary standard of what a forward range is.

Under Jonathan's definition, we'd have to incur this cost regardless of
whether we had save() or not, since .front is *always* required to be
persistent.

But I propose that the correct solution is to recognize that whether or
not .front is transient is orthogonal to whether a range is an input
range or a forward range. Many algorithms actually don't care if .front
is persistent or 
