Re: runtime hook for Crash on Error

2012-06-04 Thread Don Clugston

On 04/06/12 21:29, Steven Schveighoffer wrote:

On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston  wrote:


1. There exist cases where you cannot know why the assert failed.
2. Therefore you never know why an assert failed.
3. Therefore it is not safe to unwind the stack from a nothrow function.

Spot the fallacies.

The fallacy in moving from 2 to 3 is more serious than the one from 1
to 2: this argument is not in any way dependent on the assert occurring
in a nothrow function. Rather, it's an argument for not having
AssertError at all.


I'm not sure that is the issue here at all. What I see is that the
unwinding of the stack is optional, based on the assumption that there's
no "right" answer.

However, there is an underlying driver for not unwinding the stack --
nothrow. If nothrow results in the compiler optimizing out whatever
hooks a function needs to properly unwind itself (my limited
understanding is that this helps performance), then there *is no
choice*, you can't properly unwind the stack.

-Steve


No, this whole issue started because the compiler currently does do 
unwinding whenever it can. And Walter claimed that's a bug, and it 
should be explicitly disabled.


It is, in my view, an absurd position. AFAIK not a single argument has 
been presented in favour of it. All arguments have been about "you 
should never unwind Errors".
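
For readers coming to the thread cold, here is a minimal, self-contained sketch 
of the situation being debated. Whether the scope(exit) cleanup below runs when 
the assert fails is exactly the unsettled question, so this only illustrates the 
shape of the problem, it does not claim one outcome:

---
import core.stdc.stdio : printf;

void update(int value) nothrow
{
    scope(exit) printf("cleanup\n");   // may or may not run if the assert fails
    assert(value >= 0, "negative value");
    // ... normal work ...
}

void main()
{
    update(-1);   // an Error (AssertError) propagates out of a nothrow function
}
---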


Re: Add compile time mutable variable type

2012-06-04 Thread Chang Long

On Tuesday, 5 June 2012 at 05:39:34 UTC, Ali Çehreli wrote:

On 06/04/2012 06:22 PM, Chang Long wrote:

> The previous two examples are not correct, please see this one:

Your post sounds interesting but there are lots of errors:

> size_t line l;  <-- ERROR

> void this(string name) { <-- ERROR

etc. Can you actually compile that code?

It would be better if you could present your question with less 
code.


Ali

This is what I mean; I can't make it any smaller.
---
abstract class Template_parameter_Base{
    TypeInto ti;
    string name;
    this(TypeInto ti, string name) {
        this.ti = ti;
        this.name = name;
    }
}

class Template_parameter(T) : Template_parameter_Base {
    T value;
    alias value this;
    void assign(ref T t){
        value = t;
    }
}

class Template_Engine {
    string name;
    Template_parameter_Base[string] parameters;
    void this(string name) {
        this.name = name;
    }
}

class Template(string name) {
    static compile_time_mutable(Template_Engine) engine = new Template_Engine!name;

    void assign(string name, T)(ref T t){
        static if(name in vars) {
            static const parameter = cast(Template_parameter!T) engine.parameters[name];
            static assert( parameter.ti == typeid(T) );
        } else {
            static const parameter = new Template_parameter!T(typeid(T), name);
            static engine.parameters[name] = parameter;
        }
        parameter.assign(t);
    }
}

void main(){
    static tpl = new Template!"home_page";
    auto is_login = false;
    tpl.assign!"is_login"( is_login) );
    if( is_login ) {
        tpl.assign!"user"( new User() );
    }

    auto string switch_user = "new_user_email";
    if( switch_user !is null ) {
        tpl.assign!"user"( new User(switch_user) );
    }

    tpl.assign!"user"( true ); // static assert error
}
---

After thinking about what is done, a better way is to put the 
Template_parameter in the annotation data section.


Something like this:
class Template_Engine(string name) {
    static this_ti = typeid( typeof(this) );
    void assign(string name, T)(ref T t){
        static const parameter = new Template_parameter!(T, name);
        ti.createAnnotation("Template_Engine_Parameter", parameter);
        parameter.assign(t);
    }
    void render(){
        auto parameters = this_ti.getParametersByName("Template_Engine_Parameter");
        // compile the template to a dynamic link library from the template file and parameters
    }
}



Re: More synchronized ideas

2012-06-04 Thread Nathan M. Swan

On Tuesday, 5 June 2012 at 05:14:36 UTC, Nathan M. Swan wrote:

On Monday, 4 June 2012 at 11:17:45 UTC, Michel Fortin wrote:
After trying to make sense of the thread "synchronized 
(this[.classinfo]) in druntime and phobos", I had to write my 
opinion on all this somewhere that wouldn't be instantly lost 
in a bazillion of posts. It turned into something quite 
elaborate.





This encourages the bad practice (IMO) of shared data. Only a 
single thread should have the dictionary, with an AddWordMsg 
struct and a ConfirmWordMsg struct.


Using a message passing approach, the client thread sends an 
AddWordMsg and continues while the word is added concurrently, 
making it more efficient.


My non-expert opinion,
NMS


Though on the flip side, part of the D ideology is not 
restricting the programmer to a single paradigm. So maybe I like 
the idea :)


Data sharing should still be discouraged.

NMS




Re: Add compile time mutable variable type

2012-06-04 Thread Ali Çehreli

On 06/04/2012 06:22 PM, Chang Long wrote:

> The previous two examples are not correct, please see this one:

Your post sounds interesting but there are lots of errors:

> size_t line l;  <-- ERROR

> void this(string name) { <-- ERROR

etc. Can you actually compile that code?

It would be better if you could present your question with less code.

Ali



Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Tuesday, 5 June 2012 at 05:28:48 UTC, Roman D. Boiko wrote:
... without deep analysis I can't come up with a good API / 
design for that (without overcomplicating it). Probably keeping 
mutable and immutable APIs separate is the best choice. Will 
return to this problem once I get a bit of free time.
The simplest and possibly the best approach is to provide an 
immutable wrapper over a mutable implementation, but that may be 
difficult to make efficient given the need to support insert / 
delete as common operations.




Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 22:22:34 UTC, Roman D. Boiko wrote:

On Monday, 4 June 2012 at 22:06:49 UTC, Dmitry Olshansky wrote:

On 05.06.2012 1:56, Roman D. Boiko wrote:
so range API doesn't fit that (but could with a tweak - 
returning tail instead of mutating in popFront). If trie API 
will have similar problems, then I need to invent my own. I 
understand that immutability is not your priority for GSoC, 
though.


Well I might remove obstacles, if you outline your design more 
clearly.
OK, thanks! I'll go through your code first to understand it 
better. But even before that I need to finish an important 
support request from my past customer...
Basically, the most important aspect of immutability is returning 
a new instance of the data structure on each insert / delete / 
update, and keeping the old one unchanged, instead of performing 
the update in-place. This may not fit your most common use cases, 
though. It would be great to enable such semantics via policies, but 
without deep analysis I can't come up with a good API / design 
for that (without overcomplicating it). Probably keeping mutable 
and immutable APIs separate is the best choice. Will return to 
this problem once I get a bit of free time.
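
A minimal sketch of the semantics described above (illustrative only, not the 
trie API): an "insert" returns a new value and leaves the old one untouched.

---
// Hedged sketch: cons produces a new list; the old one is unchanged.
struct PersistentList
{
    immutable(int)[] items;

    PersistentList cons(int x) const pure
    {
        immutable(int)[] head = [x];
        return PersistentList(head ~ items);   // always allocates a fresh array
    }
}

unittest
{
    auto a = PersistentList([2, 3]);
    auto b = a.cons(1);
    assert(a.items == [2, 3]);      // old version still intact
    assert(b.items == [1, 2, 3]);   // new version sees the insert
}
---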


Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Tuesday, 5 June 2012 at 04:42:07 UTC, Dmitry Olshansky wrote:

On 05.06.2012 2:28, Roman D. Boiko wrote:
My example concern was about a fundamental problem of range APIs for
immutable data structures, which is impossible to emulate: popFront is
mutating by design.


Keep in mind that ranges are temporary objects most of the time. They 
are grease for the wheels of algorithms. Given a data structure S, 
its range is R(element of S). Thus for an immutable data structure 
the range will be a mutable entity over an immutable element type.


An interesting example is immutable strings, which still have 
ranges over them that even return dchar, not immutable(char).


I'm currently trying this approach (instead of working with 
immutable data structures directly, which would require recursion 
everywhere) in my experimental functional data structures, and it 
looks promising. :)


Re: More synchronized ideas

2012-06-04 Thread Nathan M. Swan

On Monday, 4 June 2012 at 11:17:45 UTC, Michel Fortin wrote:
After trying to make sense of the thread "synchronized 
(this[.classinfo]) in druntime and phobos", I had to write my 
opinion on all this somewhere that wouldn't be instantly lost 
in a bazillion of posts. It turned into something quite 
elaborate.





This encourages the bad practice (IMO) of shared data. Only a 
single thread should have the dictionary, with an AddWordMsg 
struct and a ConfirmWordMsg struct.


Using a message passing approach, the client thread sends an 
AddWordMsg and continues while the word is added concurrently, 
making it more efficient.


My non-expert opinion,
NMS
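
A hedged sketch of the message-passing alternative described above, using 
std.concurrency. AddWordMsg / ConfirmWordMsg / StopMsg and the owner thread 
are illustrative names, not code from the thread:

---
import std.concurrency;

struct AddWordMsg     { string word, foreignWord; }
struct ConfirmWordMsg { string word, foreignWord; Tid replyTo; }
struct StopMsg        { }

void dictionaryOwner()
{
    string[string] translations;        // owned by this thread only
    bool running = true;
    while (running)
    {
        receive(
            (AddWordMsg m)     { translations[m.word] = m.foreignWord; },
            (ConfirmWordMsg m) {
                auto p = m.word in translations;
                m.replyTo.send(p !is null && *p == m.foreignWord);
            },
            (StopMsg m)        { running = false; }
        );
    }
}

void main()
{
    auto dict = spawn(&dictionaryOwner);
    dict.send(AddWordMsg("hello", "bonjour"));          // fire and forget
    dict.send(ConfirmWordMsg("hello", "bonjour", thisTid));
    assert(receiveOnly!bool());
    dict.send(StopMsg());
}
---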


Re: Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky

On 05.06.2012 2:28, Roman D. Boiko wrote:

On Monday, 4 June 2012 at 22:21:42 UTC, Dmitry Olshansky wrote:

And you can fake immutability by always picking up an unused slot,
btw. No need to go beyond logical immutability.

That's applicable in some cases, not in general. But I agree that often
it is possible to optimize if use cases are known.

My example concern was about a fundamental problem of range APIs for
immutable data structures, which is impossible to emulate: popFront is
mutating by design.


Keep in mind that ranges are temporary objects most of the time. They are 
grease for the wheels of algorithms. Given a data structure S, its range is 
R(element of S). Thus for an immutable data structure the range will be a 
mutable entity over an immutable element type.


An interesting example is immutable strings, which still have ranges over 
them that even return dchar, not immutable(char).
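
A small illustration of that point (a sketch, nothing more): the string data 
is immutable, but the range iterating it is an ordinary mutable local, and 
front decodes to dchar rather than returning immutable(char).

---
import std.range;

unittest
{
    string s = "héllo";          // element type is immutable(char)
    auto r = s;                  // the range itself is a mutable local slice
    static assert(is(typeof(r.front) == dchar));
    r.popFront();                // mutates the range, not the characters
    assert(r.front == 'é');
    assert(s == "héllo");        // original string unchanged
}
---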


--
Dmitry Olshansky


Re: More synchronized ideas

2012-06-04 Thread Michel Fortin

On 2012-06-05 01:58:04 +, "Jason House"  said:


I was thinking something more like this:

shared class Dictionary
{
   private SynchronizedCounters counters;
   private SynchronizedStringMap translations(counters);

   void addWord(string word, string foreignWord) shared
   {
 // synchronized opIndexAssign call
 translations[word] = foreignWord;
   }
   bool confirmWord(string word, string foreignWord) shared
   {
 // synchronized opIndex call
 string candidate = translations[word];
 if (candidate != foreignWord)
   return false;
 counters.confirmOneWord();
 globalNotifyWordConfirmed(word, foreignWord);
 return true;
   }
}

// All counter operations are embedded here
// No need to review for any unsafe data usage,
// it's all here (future uses will add new methods)
synchronized class SynchronizedCounters
{
   private int confirmed, unconfirmed;
   void addUnconfirmed() { ++unconfirmed; }
   void confirmOneWord() { --unconfirmed; ++confirmed; }
}

// Similar concept, but embeds SynchronizedCounters
// I don't like that, but it's the only way to fully
// embrace D synchronized classes for toy example
synchronized class SynchronizedStringMap
{
   private string[string] translations;
   private SynchronizedCounters counters;
   this(SynchronizedCounters _counters)
 : counters(_counters) {}
   void opIndexAssign(string word, string foreignWord)
   {
 translations[word] = foreignWord;
 counters.addUnconfirmed();
   }
   string opIndex(string word)
   {
 shared string *found = word in translations;
 if (found)
   return *found;
 else
   return null;
   }
}


I see there are some errors, but I get the general concept.

And I can see why you don't like that. It's no longer a 
"SynchronizedStringMap" once it contains the counters: it's just a 
shell encapsulating the parts of Dictionary that need to be run while 
the mutex is locked. Because you can't lock Dictionary since it has 
callbacks, you have to split code that would normally be together and 
put the synchronized portion in another class just to fit the 
constraints of the synchronized class concept. I think this illustrates 
that associating mutexes with variables is a better approach.


Besides, I wouldn't want to be the one who has to teach people to write 
concurrent code using this paradigm. Even the equivalent C++ code is 
easier to read and explain than this; shouldn't that raise a red flag?



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: More synchronized ideas

2012-06-04 Thread Jason House

On Monday, 4 June 2012 at 17:40:38 UTC, Michel Fortin wrote:
On 2012-06-04 13:15:57 +, "Jason House" 
 said:



If you really want to use synchronized classes, then you should
have two of them.


Valid comment. I thought about creating yet another example 
illustrating that, but I gave up when realizing the silliness 
of it. I mean, yes you can make it work, but at the price of 
writing a lot of boilerplate code just for forwarding things 
around.


Here's a modified implementation of that dictionary class, 
wrapping the translations AA and the two counters in two 
distinct classes:


...

That said, the other point is that it's too easy to shoot 
yourself in the foot using implicit synchronization. It's easy 
to forget you still have a mutex locked when adding the call to 
globalNotifyWordConfirmed, especially if you add this line of 
code later in the development process and you have forgotten 
about the "synchronized" keyword far away at the very top of 
the class declaration. And once you have a deadlock and you've 
identified the culprit, to fix the bug you need to split 
everything that needs to be synchronized into a separate class, 
a tiresome and bug-prone process, just because you can't 
opt-out of the implicit synchronization. Call me nuts if you 
want, but I think this is awful.



I was thinking something more like this:

shared class Dictionary
{
  private SynchronizedCounters counters;
  private SynchronizedStringMap translations(counters);

  void addWord(string word, string foreignWord) shared
  {
// synchronized opIndexAssign call
translations[word] = foreignWord;
  }
  bool confirmWord(string word, string foreignWord) shared
  {
// synchronized opIndex call
string candidate = translations[word];
if (candidate != foreignWord)
  return false;
counters.confirmOneWord();
globalNotifyWordConfirmed(word, foreignWord);
return true;
  }
}

// All counter operations are embedded here
// No need to review for any unsafe data usage,
// it's all here (future uses will add new methods)
synchronized class SynchronizedCounters
{
  private int confirmed, unconfirmed;
  void addUnconfirmed() { ++unconfirmed; }
  void confirmOneWord() { --unconfirmed; ++confirmed; }
}

// Similar concept, but embeds SynchronizedCounters
// I don't like that, but it's the only way to fully
// embrace D synchronized classes for toy example
synchronized class SynchronizedStringMap
{
  private string[string] translations;
  private SynchronizedCounters counters;
  this(SynchronizedCounters _counters)
: counters(_counters) {}
  void opIndexAssign(string word, string foreignWord)
  {
translations[word] = foreignWord;
counters.addUnconfirmed();
  }
  string opIndex(string word)
  {
shared string *found = word in translations;
if (found)
  return *found;
else
  return null;
  }
}







That being said, I've never used synchronized classes in my 
multithreaded D1/D2 code. I used Tango's Mutexes and 
Conditions. They're more flexible.


I'd say it's a good choice. How does it work with shared 
variables in D2, or are you just ignoring the type system?



I did not find my mutex-based code, but did find a lock-free 
broadcast queue.  It was written to be shared-aware.  Here's the 
receive class:

/// Receives delegates from the specified sender. Never blocks.
class receiver{
    private target parent;
    private shared sender source;
    private int nextMessageId = 1;
    this(target t, shared sender s){ parent = t; source = s; }
    bool receive(){
        if (source.receive(nextMessageId, parent)){
            nextMessageId++;
            return true;
        }
        return false;
    }
}

And here was the receive method in the sender class:
private bool receive(int messageId, target t) shared
{
    if (pending == 0 || id < messageId)
        return false;
    msg(t);
    atomicDecrement!(msync.raw)(pending);
    return true;
}

atomicDecrement in Tango was a template.  Looking at the commit 
logs, it looks like I had to hack at isValidNumericType, but I 
did not have to change the Atomic.d (beyond removing the volatile 
keywords and fixing incorrect assembly)


Re: Add compile time mutable variable type

2012-06-04 Thread Chang Long

On Tuesday, 5 June 2012 at 01:01:07 UTC, Chang Long wrote:
 I need a type that can be changed at compile time; it can be 
immutable or mutable at run time.


For example :
---
abstract class Template_parameter_Base{
    TypeInto ti;
    string file;
    size_t line l;
    string name;

    this(TypeInto ti, string file, size_t line, string name){
        this.ti = ti;
        this.file = file;
        this.line = line;
        this.name = name;
    }
}

class Template_parameter(T) : Template_parameter_Base {
    T value;
    alias T this;
    void assign(ref T t){
        value = t;
    }
}

class Template {
    static compile_time_mutable(string[Template_parameter_Base]) template_vars;

    void assign(string name, string __file__ = __FILE__, size_t __line__ = __LINE__, T)(T t) {
        static if( name in template_vars ){
            static parameter = template_vars[name];
        } else {
            static parameter = template_vars[name] = new Template_parameter(typeid(T), __file__, __line__);
        }
        parameter.assign(t);
    }

    string render(){
        // use template_vars to compile a dynamic link lib, load it,
        // put the template_vars into it, and run it;
        // if the dynamic link lib already exists, or the template file
        // has not changed, just run it.
    }
}

void main(){
    static tpl = new Template;
    auto is_login = false;
    tpl.assign!"is_login"( is_login) );
    if( is_login ) {
        tpl.assign!"user"( new User() );
    }
}

---

The reason I need this is that I need to know all parameters the 
first time Template.render is called, but a lot of parameters will 
not be set at run time. So I need a type that can be changed at 
compile time.

The new type would behave just like a mutable type in CTFE.



The previous two examples are not correct, please see this one:





abstract class Template_parameter_Base{
    TypeInto ti;
    string file;
    size_t line l;
    string name;

    this(TypeInto ti, string file, size_t line, string name){
        this.ti = ti;
        this.file = file;
        this.line = line;
        this.name = name;
    }
}

class Template_parameter(T) : Template_parameter_Base {
    T value;
    alias T this;
    void assign(ref T t){
        value = t;
    }
}

class Template_Engine {
    string name;
    Template_parameter_Base[string] parameters;
    void this(string name) {
        this.name = name;
    }
}


class Template (string name) {

    static compile_time_mutable(Template_Engine) engine = new Template_Engine(name);

    void assign(string name, string __file__ = __FILE__, size_t __line__ = __LINE__, T)(T t) {

        static if( name in engine.parameters ){
            static compile_time_mutable(parameter) = engine.parameters[name];
            static assert( parameter.ti == typeid(T),
                "assign var at " ~ __file__ ~ " line " ~ __line__ ~ " with type " ~ T.stringof
                ~ " conflicts with old parameter at file " ~ parameter.file
                ~ " line " ~ parameter.line );
        } else {
            static compile_time_mutable(parameter) = engine.parameters[name]
                = new Template_parameter(typeid(T), __file__, __line__, name);
        }
        // put the value into the parameter at run time
        parameter.assign(t);
    }

    string render(){
        // engine.parameters holds all parameter type information;
        // use it to compile a dynamic link lib from the template file, load it,
        // put the values into it, and run it;
        // if the dynamic link lib already exists, or the template file
        // has not changed, just run it.
    }
}

void main(){
    static tpl = new Template!"home_page";
    auto is_login = false;
    tpl.assign!"is_login"( is_login) );
    if( is_login ) {
        tpl.assign!"user"( new User() );
    }

    auto string switch_user = "new_user_email";
    if( switch_user !is null ) {
        tpl.assign!"user"( new User(switch_user) );
    }

    tpl.assign!"user"( true ); // static assert error
}
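
A hedged sketch of what can be done in current D without the proposed 
compile_time_mutable: parameters are registered in an ordinary run-time 
associative array, and the type conflict is caught with enforce at run time 
instead of the requested static assert. The names (ParamBase, Param, Engine) 
are illustrative, not the poster's API.

---
import std.exception : enforce;

abstract class ParamBase
{
    TypeInfo ti;
    string name;
    this(TypeInfo ti, string name) { this.ti = ti; this.name = name; }
}

class Param(T) : ParamBase
{
    T value;
    this(string name) { super(typeid(T), name); }
    void assign(T t) { value = t; }
}

class Engine
{
    ParamBase[string] parameters;

    void assign(string name, T)(T t)
    {
        if (auto p = name in parameters)
        {
            enforce(p.ti == typeid(T),
                "parameter '" ~ name ~ "' was registered with a different type");
            (cast(Param!T) *p).assign(t);
        }
        else
        {
            auto param = new Param!T(name);
            param.assign(t);
            parameters[name] = param;
        }
    }
}

void main()
{
    auto engine = new Engine;
    engine.assign!"is_login"(false);
    engine.assign!"user"("new_user_email");
    // engine.assign!"user"(true); // would throw at run time, not at compile time
}
---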



Re: Increment / Decrement Operator Behavior

2012-06-04 Thread Kevin Cox
On Jun 4, 2012 8:43 PM, "Xinok"  wrote:
>
> I wonder in that case, is it even worth including in the language? For me
anyways, the whole point of these operators is to use them in expressions.
Otherwise, why not simply write (i+=1)?

For pointers they are useful because they go up in element units, not bytes
(although addition often does too).
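
A small illustration of that point (a sketch): ++p advances by one element 
(4 bytes for int), exactly like p += 1, and plain addition scales the same way.

---
unittest
{
    int[3] a = [10, 20, 30];
    int* p = a.ptr;
    ++p;                        // now points at a[1]
    assert(*p == 20);
    assert(p == a.ptr + 1);     // pointer addition is also in element units
}
---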


Re: Add compile time mutable variable type

2012-06-04 Thread Chang Long

On Tuesday, 5 June 2012 at 01:01:07 UTC, Chang Long wrote:
 I need a type that can be changed at compile time; it can be 
immutable or mutable at run time.


For example :
---
abstract class Template_parameter_Base{
    TypeInto ti;
    string file;
    size_t line l;
    string name;

    this(TypeInto ti, string file, size_t line, string name){
        this.ti = ti;
        this.file = file;
        this.line = line;
        this.name = name;
    }
}

class Template_parameter(T) : Template_parameter_Base {
    T value;
    alias T this;
    void assign(ref T t){
        value = t;
    }
}

class Template {
    static compile_time_mutable(string[Template_parameter_Base]) template_vars;

    void assign(string name, string __file__ = __FILE__, size_t __line__ = __LINE__, T)(T t) {
        static if( name in template_vars ){
            static parameter = template_vars[name];
        } else {
            static parameter = template_vars[name] = new Template_parameter(typeid(T), __file__, __line__);
        }
        parameter.assign(t);
    }

    string render(){
        // use template_vars to compile a dynamic link lib, load it,
        // put the template_vars into it, and run it;
        // if the dynamic link lib already exists, or the template file
        // has not changed, just run it.
    }
}

void main(){
    static tpl = new Template;
    auto is_login = false;
    tpl.assign!"is_login"( is_login) );
    if( is_login ) {
        tpl.assign!"user"( new User() );
    }
}

---

The reason I need this is that I need to know all parameters the 
first time Template.render is called, but a lot of parameters will 
not be set at run time. So I need a type that can be changed at 
compile time.

The new type would behave just like a mutable type in CTFE.


this example is more convincing:




abstract class Template_parameter_Base{
    TypeInto ti;
    string file;
    size_t line l;
    string name;

    this(TypeInto ti, string file, size_t line, string name){
        this.ti = ti;
        this.file = file;
        this.line = line;
        this.name = name;
    }
}

class Template_parameter(T) : Template_parameter_Base {
    T value;
    alias T this;
    void assign(ref T t){
        value = t;
    }
}

class Template {
    static compile_time_mutable(Template_parameter_Base[string]) template_vars;

    void assign(string name, string __file__ = __FILE__, size_t __line__ = __LINE__, T)(T t) {

        static if( name in template_vars ){
            static compile_time_mutable(parameter) = template_vars[name];
            static assert( parameter.ti == typeid(T),
                "assign var at " ~ __file__ ~ " line " ~ __line__ ~ " with type " ~ T.stringof
                ~ " conflicts with old parameter at file " ~ parameter.file
                ~ " line " ~ parameter.line );
        } else {
            static compile_time_mutable(parameter) = template_vars[name]
                = new Template_parameter(typeid(T), __file__, __line__, name);
        }
        // put the value into the parameter at run time
        parameter.assign(t);
    }

    string render(){
        // use template_vars to compile a dynamic link lib, load it,
        // put the template_vars into it, and run it;
        // if the dynamic link lib already exists, or the template file
        // has not changed, just run it.
    }
}

void main(){
    static tpl = new Template;
    auto is_login = false;
    tpl.assign!"is_login"( is_login) );
    if( is_login ) {
        tpl.assign!"user"( new User() );
    }

    auto string switch_user = "new_user_email";
    if( switch_user !is null ) {
        tpl.assign!"user"( new User(switch_user) );
    }

    tpl.assign!"user"( true ); // static assert error
}



Add compile time mutable variable type

2012-06-04 Thread Chang Long
 I need a type that can be changed at compile time; it can be 
immutable or mutable at run time.


For example :
---
abstract class Template_parameter_Base{
    TypeInto ti;
    string file;
    size_t line l;
    string name;

    this(TypeInto ti, string file, size_t line, string name){
        this.ti = ti;
        this.file = file;
        this.line = line;
        this.name = name;
    }
}

class Template_parameter(T) : Template_parameter_Base {
    T value;
    alias T this;
    void assign(ref T t){
        value = t;
    }
}

class Template {
    static compile_time_mutable(string[Template_parameter_Base]) template_vars;

    void assign(string name, string __file__ = __FILE__, size_t __line__ = __LINE__, T)(T t) {
        static if( name in template_vars ){
            static parameter = template_vars[name];
        } else {
            static parameter = template_vars[name] = new Template_parameter(typeid(T), __file__, __line__);
        }
        parameter.assign(t);
    }

    string render(){
        // use template_vars to compile a dynamic link lib, load it,
        // put the template_vars into it, and run it;
        // if the dynamic link lib already exists, or the template file
        // has not changed, just run it.
    }
}

void main(){
    static tpl = new Template;
    auto is_login = false;
    tpl.assign!"is_login"( is_login) );
    if( is_login ) {
        tpl.assign!"user"( new User() );
    }
}

---

The reason I need this is that I need to know all parameters the first 
time Template.render is called, but a lot of parameters will not be set 
at run time, so I need a type that can be changed at compile time.

The new type would behave just like a mutable type in CTFE.
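
Context for that request (a sketch under plain current-D assumptions, no 
language extension): within a single CTFE evaluation, locals are freely 
mutable, but no mutable state survives from one compile-time computation or 
template instantiation to the next, which is the gap the proposed type is 
meant to fill.

---
int count(string[] names)            // ordinary function, also runs in CTFE
{
    int n = 0;                       // mutable while the CTFE evaluation runs
    foreach (name; names)
        ++n;
    return n;
}

enum total = count(["is_login", "user"]);   // forced compile-time evaluation
static assert(total == 2);

// There is, however, no way to write something like:
//     enum registry = ...;
//     registry["user"] = "bool";    // error: compile-time values are frozen
---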





Re: Increment / Decrement Operator Behavior

2012-06-04 Thread Xinok

On Monday, 4 June 2012 at 20:44:42 UTC, bearophile wrote:
1) Make post/pre increments return void. This avoids those 
troubles. I think the Go language has chosen this. This is my 
preferred solution.
I wonder in that case, is it even worth including in the 
language? For me anyways, the whole point of these operators is 
to use them in expressions. Otherwise, why not simply write 
(i+=1)?


Re: Increment / Decrement Operator Behavior

2012-06-04 Thread bearophile

Jonathan M Davis:

If they don't bother to learn, then they're going to get 
bitten, and that's life.


A modern language must try to avoid common programmer mistakes, 
where possible (like in this case).



As for treating pre or post-increment operators specially in 
some manner, that
doesn't make sense. The problem is far more general than that. 
If we're going
to change anything, it would be to make it so that the language 
itself defines
the order of evaluation of function arguments as being 
left-to-right.


Probably I have expressed myself badly there, sorry. I'd like to 
see function calls fixed as Walter has stated.
And regarding pre/post de/increment operators, I find them handy, 
but I have seen _so much_ C/C++ code that abuses them that maybe 
I'd like them to return void, as in Go.


Bye,
bearophile


Re: Increment / Decrement Operator Behavior

2012-06-04 Thread Jonathan M Davis
On Monday, June 04, 2012 23:22:26 Bernard Helyer wrote:
> On Monday, 4 June 2012 at 20:44:42 UTC, bearophile wrote:
> > Bernard Helyer:
> >> If you find yourself using postfix increment/decrement
> >> operators in the same function call in multiple arguments,
> >> slap yourself firmly in the face and refactor that code.
> > 
> > I think this is not acceptable, you can't rely on that, future
> > D programers will not slap themselves and refactor their code.
> > Some of the acceptable alternatives are:
> > 1) Make post/pre increments return void. This avoid those
> > troubles. I think Go language has chosen this. This is my
> > preferred solution.
> > 2) Turn that code into a syntax error for some other cause.
> > 3) Design the language so post/pre increments give a defined
> > effect on all D compilers on all CPUs. Walter since lot of time
> > says this is planned for D. This leads to deterministic
> > programs, but sometimes they are hard to understand and hard to
> > translate (port) to other languages any way. Translating code
> > to other languages is not irrelevant because D must be designed
> > to make it easy to understand the semantics of the code.
> > 
> > Bye,
> > bearophile
> 
> If people can't be bothered to understand what they write, they
> can go hang.

I think that Bernard is being a bit harsh, but in essence, I agree. Since the 
evaluation order of arguments is undefined, programmers should be aware of that 
and code accordingly. If they don't bother to learn, then they're going to get 
bitten, and that's life.

Now, Walter _has_ expressed interest in changing it so that the order of 
evaluation for function arguments is fully defined as being left-to-right, 
which solves the issue. I'd still counsel against getting into the habit of 
writing code which relies on the order of evaluation for the arguments to a 
function, since it's so common for other languages not to define it (so that 
the compiler can better optimize the calls), and so getting into the habit of 
writing code which _does_ depend on the order of evaluation for function 
arguments will cause you to write bad code when you work in most other 
programming languages.

As for treating pre or post-increment operators specially in some manner, that 
doesn't make sense. The problem is far more general than that. If we're going 
to change anything, it would be to make it so that the language itself defines 
the order of evaluation of function arguments as being left-to-right.
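
A minimal example of the hazard being discussed (illustrative only): the 
output depends on the currently unspecified evaluation order of the two 
arguments.

---
import std.stdio : writefln;

void main()
{
    int i = 0;
    writefln("%s %s", i++, i++);   // "0 1" or "1 0" depending on the compiler
}
---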

- Jonathan M Davis


Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 22:21:42 UTC, Dmitry Olshansky wrote:
And you can fake immutability by always picking up an unused 
slot, btw. No need to go beyond logical immutability.
That's applicable in some cases, not in general. But I agree that 
often it is possible to optimize if use cases are known.


My example concern was about a fundamental problem of range APIs 
for immutable data structures, which is impossible to emulate: 
popFront is mutating by design.


Re: More synchronized ideas

2012-06-04 Thread Michel Fortin
On 2012-06-04 18:22:22 +, "Steven Schveighoffer" 
 said:


On Mon, 04 Jun 2012 07:17:45 -0400, Michel Fortin  
 wrote:


After trying to make sense of the thread "synchronized  
(this[.classinfo]) in druntime and phobos", I had to write my opinion 
on all this somewhere that wouldn't be instantly lost in a bazillion of 
posts. It turned into something quite elaborate.





I like this.  But it needs a lot of work.

A few comments:

1. This does not handle shared *at all*.  Presumably, there is no 
reason  to lock unshared data, so this has to be handled somewhere.  If 
you say  "synchronized implies shared", well, then how do you have a 
shared int  inside an unshared class? My instinct is that all the 
methods that need to use synchronized need to be declared shared 
(meaning the whole class  data is shared).  But that sucks, because 
what if you have a thread-local  instance?


To the type system, the synchronized variable is thread-local with one 
restriction: you can only access it inside a synchronized block. So 
without the synchronized block you can't read or write to it, and you 
can't take its address. Inside the synchronized block, any expression 
making use of that variable is tainted by the current scope and must be 
pure (weakly). Except if the only result of an expression contains no 
indirection (is entirely copied), or is immutable or shared (no 
synchronization needed), then the expression does not get tainted and 
its result can be sent anywhere.


Note that this taint thing is only done locally inside the synchronized 
block: called functions are simply treated as an expression with inputs 
and outputs to check whether they are tainted or not.



I have an idea to solve this.  Since the mutexes are implicit, we can  
declare space for them, but only allocate them when the class instance 
is  shared (allocated on construction).  Then when synchronized goes to 
lock  them, if they are null, do nothing.


Or they could simply work like the monitors objects currently have.

As I wrote, I think we need support for shared mutexes too (integrated 
with const if you will). Ideally, there'd be a way to choose your own 
mutex implementations, perhaps with "synchronized(Spinlock) int x".




But what if some data is not marked synchronized?


You might actually want to restrict synchronized variables to shared 
classes and shared structs. In that case, variables not marked as 
synchronized will be shared and accessible using atomic operations.



I can see why Bartosz had such trouble creating a sharing system in a  
simple manner...


:-)


2. As far as determining a mutex to protect multiple items of data, 
what  about:


synchronized(symbolname) int x, int y;

or

synchronized(symbolname)
{
int x;
int y;
}

where you cannot do synchronized(x) or synchronized(y), and cannot read 
or  write x or y without doing synchronized(symbolname).


Or we could be less original: use a struct. It's just a minor cosmetic 
problem actually.


struct XY { int x, y; }
synchronized XY xy;


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 22:06:49 UTC, Dmitry Olshansky wrote:

On 05.06.2012 1:56, Roman D. Boiko wrote:
Will it be difficult to adapt your API for immutable tries? 
E.g., it is
not possible to implement immutable sequence (linked list) as 
a range,


Linked list? I'm horrified. Though I'd need some info on where 
and why you'd need that ;)
Despite some claims to the contrary small arrays (like e-hm 
10_000 elements) are faster in nearly all operations possible.
That was for illustration only, as the most trivial example of 
immutable data structure (and probably the most widely used 
structure in functional programming).


so range API doesn't fit that (but could with a tweak - 
returning tail

instead of mutating in popFront). If trie API will have similar
problems, then I need to invent my own. I understand that 
immutability

is not your priority for GSoC, though.


Well I might remove obstacles, if you outline your design more 
clearly.
OK, thanks! I'll go through your code first to understand it 
better. But even before that I need to finish an important 
support request from my past customer...


Re: Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky

On 05.06.2012 2:06, Dmitry Olshansky wrote:

On 05.06.2012 1:56, Roman D. Boiko wrote:

On Monday, 4 June 2012 at 21:39:50 UTC, Dmitry Olshansky wrote:

On 05.06.2012 1:16, Roman D. Boiko wrote:
I believe that once encoding is established your compiler tool should
use the best code for that encoding. And that means templates,
tailored per encoding in my book.
If anything I plan to use Tries on strings without decoding codepoint,
just using length of it (as first stage, might need some tweak).

Will it be difficult to adapt your API for immutable tries? E.g., it is
not possible to implement immutable sequence (linked list) as a range,


Linked list? I'm horrified. Though I'd need some info on where and why
you'd need that ;)
Despite some claims to the contrary small arrays (like e-hm 10_000
elements) are faster in nearly all operations possible.


And you can fake immutability by always picking up an unused slot btw. 
No need to go beyond logical immutability.



--
Dmitry Olshansky


Re: Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky

On 05.06.2012 1:56, Roman D. Boiko wrote:

On Monday, 4 June 2012 at 21:39:50 UTC, Dmitry Olshansky wrote:

On 05.06.2012 1:16, Roman D. Boiko wrote:
I believe that once encoding is established your compiler tool should
use the best code for that encoding. And that means templates,
tailored per encoding in my book.
If anything I plan to use Tries on strings without decoding codepoint,
just using length of it (as first stage, might need some tweak).

Will it be difficult to adapt your API for immutable tries? E.g., it is
not possible to implement immutable sequence (linked list) as a range,


Linked list? I'm horrified. Though I'd need some info on where and why 
you'd need that ;)
Despite some claims to the contrary small arrays (like e-hm 10_000 
elements) are faster in nearly all operations possible.



so range API doesn't fit that (but could with a tweak - returning tail
instead of mutating in popFront). If trie API will have similar
problems, then I need to invent my own. I understand that immutability
is not your priority for GSoC, though.


Well I might remove obstacles, if you outline your design more clearly.

--
Dmitry Olshansky


Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 21:39:50 UTC, Dmitry Olshansky wrote:

On 05.06.2012 1:16, Roman D. Boiko wrote:
I believe that once encoding is established your compiler tool 
should use the best code for that encoding. And that means 
templates,  tailored per encoding in my book.
If anything I plan to use Tries on strings without decoding 
codepoint, just using length of it (as first stage, might need 
some tweak).
Will it be difficult to adapt your API for immutable tries? E.g., 
it is not possible to implement immutable sequence (linked list) 
as a range, so range API doesn't fit that (but could with a tweak 
- returning tail instead of mutating in popFront). If trie API 
will have similar problems, then I need to invent my own. I 
understand that immutability is not your priority for GSoC, 
though.


Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 21:39:50 UTC, Dmitry Olshansky wrote:
And before you run away with that horrible idea of ever 
decoding UTF in lexer... Just don't do that. Trust me, it's not 
as small a price as it seems at first. At least keep it only at 
prototype stage as it simplifies things.
I didn't plan to convert input into some other encoding. But I 
missed the idea that it is possible to create finite automata as 
a template and avoid decoding altogether. IIRC, I rejected this 
approach when I decided to convert everything into UTF-8 long ago, 
and didn't reconsider after discarding that idea following your 
previous suggestion to avoid converting. Thus your idea was used 
only partially, and now I wonder how I did not discover this 
myself! :)


Re: Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky

On 05.06.2012 1:16, Roman D. Boiko wrote:

On Monday, 4 June 2012 at 21:07:02 UTC, Roman D. Boiko wrote:

For example, one by
one would allow ignoring key encoding (and thus using multiple
encodings
simultaneously just as easily as single).


It's just as easy with the whole thing. Treat it as bytes ;)

Except when equivalent keys in different encodings should be treated
as equal.


I believe that once encoding is established your compiler tool should 
use the best code for that encoding. And that means templates,  tailored 
per encoding in my book.
If anything I plan to use Tries on strings without decoding codepoint, 
just using length of it (as first stage, might need some tweak).


>> But now I can see that my counter-example is only partially

valid - walklength could be used instead of length (more expensive,
though), and dchars everywhere.

Another counter-example is searching for strings with specified prefix.
One-by-one fits better here. I didn't understand whether such use cases
are supported at both API and implementation levels.


They are not... *yawn*. Okay, I'll make it support InputRange of 
typeof(Key.init[0]) along with specific key type iff key type is 
RandomAccessRange :) It will not work however with SetAsSlot and MapAsSlot.


And before you run away with that horrible idea of ever decoding UTF in 
lexer... Just don't do that. Trust me, it's not as small a price as it 
seems at first. At least keep it only at prototype stage as it 
simplifies things.


--
Dmitry Olshansky


Re: opCaret to complement opDollar when specifying slices

2012-06-04 Thread Xinok

On Saturday, 2 June 2012 at 11:49:17 UTC, Dario Schiavon wrote:

Hi,

I just read some old threads about opDollar and the wish to 
have it work for non zero-based arrays, arrays with gaps, 
associative arrays with non-numerical indices, and so on. It 
was suggested to define opDollar as the end of the array rather 
than the length (and perhaps rename opDollar to opEnd to 
reflect this interpretation), so that collection[someIndex .. 
$] would consistently refer to a slice from someIndex to the 
end of the collection (of course the keys must have a defined 
ordering for it to make sense).


I'm just thinking, if we want to generalize slices for those 
cases, shouldn't we have a symmetrical operator for the first 
element of the array? Since the $ sign was evidently chosen to 
parallel the regexp syntax, why don't we add ^ to refer to the 
first element? This way, collection[^ .. $] would slice the 
entire collection, just like collection[].


Until now, ^ is only used as a binary operator, so this 
addition shouldn't lead to ambiguous syntax. It surely wouldn't 
be used as often as the opDollar, so I understand if you oppose 
the idea, but it would at least make the language a little more 
"complete".


The problem I see with this is that it would be a larger burden when 
writing generic code. Libraries would have to be written to 
compensate for those containers. I'd prefer that all containers 
are simply zero-based, unless there's a need for negative indices 
(i.e. pointers). I think random-access ranges may be intended to 
be zero-based as well.
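
For reference, $ is already overloadable for user types today; a minimal 
sketch (Span and its members are illustrative, not from the thread) of the 
opDollar/opSlice machinery the ^ proposal would extend:

---
struct Span
{
    int[] data;
    size_t opDollar() const { return data.length; }
    int opIndex(size_t i) { return data[i]; }
    int[] opSlice(size_t a, size_t b) { return data[a .. b]; }
}

unittest
{
    auto s = Span([1, 2, 3, 4]);
    assert(s[$ - 1] == 4);            // $ rewrites to s.opDollar()
    assert(s[1 .. $] == [2, 3, 4]);
}
---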


Re: Increment / Decrement Operator Behavior

2012-06-04 Thread Bernard Helyer

On Monday, 4 June 2012 at 20:44:42 UTC, bearophile wrote:

Bernard Helyer:

If you find yourself using postfix increment/decrement 
operators in the same function call in multiple arguments, 
slap yourself firmly in the face and refactor that code.


I think this is not acceptable; you can't rely on that, future 
D programmers will not slap themselves and refactor their code. 
Some of the acceptable alternatives are:
1) Make post/pre increments return void. This avoids those 
troubles. I think the Go language has chosen this. This is my 
preferred solution.

2) Turn that code into a syntax error for some other cause.
3) Design the language so post/pre increments give a defined 
effect on all D compilers on all CPUs. Walter has said for a long 
time that this is planned for D. This leads to deterministic 
programs, but sometimes they are hard to understand and hard to 
translate (port) to other languages anyway. Translating code 
to other languages is not irrelevant because D must be designed 
to make it easy to understand the semantics of the code.


Bye,
bearophile


If people can't be bothered to understand what they write, they 
can go hang.





Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 21:07:02 UTC, Roman D. Boiko wrote:

For example, one by
one would allow ignoring key encoding (and thus using 
multiple encodings

simultaneously just as easily as single).


It's just as easy with the whole thing. Treat it as bytes ;)
Except when equivalent keys in different encodings should be 
treated as equal. But now I can see that my counter-example is 
only partially valid - walklength could be used instead of 
length (more expensive, though), and dchars everywhere.
Another counter-example is searching for strings with specified 
prefix. One-by-one fits better here. I didn't understand whether 
such use cases are supported at both API and implementation 
levels.


Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 20:40:03 UTC, Dmitry Olshansky wrote:

[snip]
Sorry, my Trie implementation focused on the "construct once - read 
everywhere" case.
My cases will likely have quite low amortized number of reads per 
insert / delete.


How do you handle situations when not existent, etc., is 
needed?


Easy: say you have 2 levels, 4 entries each (for the sake of 
simplicity), and say the final value is int.


Then in highly redundant data, or just when there is a small 
amount of values in the initial data set (both cases are 
equivalent, thus tries are easily invertible btw):


LVL 0: [0, 1, 0, 0]
This first one always occupies full size (it's asserted that 
there is no index >= 4)

LVL 1: [0, 0, 0, 0] [1, 0, 2, 3]
Note again - only full pages, no cheating and half-pages, but 
we can save on the amount of them (note almost obligatory zero 
page)


So it contains only 1, 2 and 3 at indexes 4, 6, 7 
respectively; T.init is our way of saying NOT EXISTS ... yes, I think 
that should be user definable.
This way there are no checks anywhere, only shift, add, 
dereference, shift, add, dereference...

Smart


For example, one by
one would allow ignoring key encoding (and thus using multiple 
encodings

simultaneously just as easily as single).


It's just as easy with the whole thing. Treat it as bytes ;)
Except when equivalent keys in different encodings should be 
treated as equal. But now I can see that my counter-example is 
only partially valid - walklength could be used instead of length 
(more expensive, though), and dchars everywhere.



I guess making my own mistakes is necessary anyway.


It could be enlightening; just don't give up too quickly and don't 
jump to conclusions. In fact, try to be sympathetic with the 
"losing party", as in ... "hm, this way is much slower, too 
bad - I have to optimize it somehow". In other words, make sure 
you squeezed all you can from the "slow" method.
This deserves quoting somewhere :) Thanks a lot and have a good 
night! (It's late in Russia, isn't it?)


Re: opCaret to complement opDollar when specifying slices

2012-06-04 Thread Steven Schveighoffer

On Mon, 04 Jun 2012 16:32:55 -0400, Roman D. Boiko  wrote:


On Monday, 4 June 2012 at 20:26:52 UTC, Steven Schveighoffer wrote:

On Mon, 04 Jun 2012 16:13:49 -0400, Mehrdad

Can you use "null"?


Hm... now that null has its own type, I likely could.

I suppose that would map properly to 0.

-Steve
But if the key is non-nullable this might be confusing, or even not  
possible.


Well, you could say that null keys are not allowed.  But then it makes no  
sense for null not to work in other places.


I really am not sure this works well, I think it would be too confusing.

map[null] = 5; // set the first element to 5?

map[null..4] = 6; // set all the elements with keys before 4 to 6

map!keyType k = null;
map[k..$]; // likely an error.

I really actually think I like using map.begin better than null...

-Steve


Re: Increment / Decrement Operator Behavior

2012-06-04 Thread bearophile

Bernard Helyer:

If you find yourself using postfix increment/decrement 
operators in the same function call in multiple arguments, slap 
yourself firmly in the face and refactor that code.


I think this is not acceptable; you can't rely on that, future D 
programmers will not slap themselves and refactor their code. Some 
of the acceptable alternatives are:
1) Make post/pre increments return void. This avoids those 
troubles. I think the Go language has chosen this. This is my 
preferred solution.

2) Turn that code into a syntax error for some other cause.
3) Design the language so post/pre increments give a defined 
effect on all D compilers on all CPUs. Walter has said for a long 
time that this is planned for D. This leads to deterministic programs, 
but sometimes they are hard to understand and hard to translate 
(port) to other languages anyway. Translating code to other 
languages is not irrelevant because D must be designed to make it 
easy to understand the semantics of the code.


Bye,
bearophile


Re: Increment / Decrement Operator Behavior

2012-06-04 Thread Xinok

On Monday, 4 June 2012 at 20:08:57 UTC, simendsjo wrote:

Oh, and what should writeln(i++, ++i, ++i, i++) do?

It is messy whatever the logic implementation.


For prefix operators, it would be logical to perform the action 
before the statement, such that the code would be rewritten as:

++i
++i
writeln(i, i, i, i)
i++
i++

However, I already stated that it wouldn't work for prefix 
operators. Take this statement:


++foo(++i)

There's no way to increment the return value of foo without 
calling foo first. This "logic" would only work for the postfix 
operators.


I came up with the idea after refactoring this code:
https://github.com/Xinok/XSort/blob/master/timsort.d#L111

Each call to mergeAt is followed by --stackLen. I could have used 
stackLen-- in the mergeAt statement instead, but I didn't want to 
rely on operator precedence for the correct behavior.
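
A hedged sketch of the refactoring described above (simplified, not the 
actual timsort.d code): the decrement lives in its own statement instead of 
as a postfix operator buried inside the call.

---
// Simplified sketch, not the actual timsort.d code.
void mergeAt(int[] stack, size_t at) { /* merge the runs at `at` and `at + 1` */ }

void collapse(int[] stack, ref size_t stackLen)
{
    // instead of: mergeAt(stack, stackLen-- - 2);
    mergeAt(stack, stackLen - 2);
    --stackLen;
}
---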


Re: Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky

[snip]

Aye, the good thing about them - the amount of data affected by any
change is localized to one "page". So you just copy it over and remap
indices/pointers.

Actually, Microsoft went this way (immutable hash tables) in their
Roslyn preview implementation. However, I still believe that tries will
work better here. Will check...

Would bulk pre-allocation of memory to be used by trie improve locality?
With some heuristics for copying when it goes out of control.



Sorry, my Trie implementation focused on the "construct once - read 
everywhere" case. Like I noted, mutation is not hard, but it's easy to 
blow up the size if used unknowingly.

I think along the lines of:

auto trie = ...;//create it
for(...)
{
trie[x] = ...;//modify repeatedly
}
trie.compact();//redo same algorithm as during construction, O(N^2)


It is difficult to create a good API for fundamental data structures,
because various use cases would motivate different trade-offs. The same
is true for implementation. This is why I like your decision to
introduce policies for configuration. Rationale and use cases should
help to analyze design of your API and implementation, thus you will get
better community feedback :)


Well I guess I'll talk in depth about them in the article, as the
material exceeds the sane limits of a single NG post.

In brief:
- multiple levels are stored in one memory chunk one after another,
thus helping a bit with cache-locality (first level goes first)
- constructors minimize the number of "pages" on each level by
constructing it outwards from the last level and checking duplicates
(costs ~ O(N^2) though, IIRC)

So this price is paid only on construction, right? Are there
alternatives to choose from (when needed)? If yes, which?


See above. The price is paid every time you want to squeeze it into as 
little memory as possible by removing duplicate pages.



- I learned the hard way not to introduce extra conditionals anywhere,
so there is no "out of range, max index, not existent" crap, in all
cases it's clean-cut memory access. Extra bits lost on having at least
one "default" page per level can be saved by going extra level

Could you please elaborate? How do you handle situations when not
existent, etc., is needed?

Easy: say you have 2 levels, 4 entries each (for the sake of simplicity), 
and say the final value is int.


Then in highly redundant data, or just when there is a small amount of 
values in the initial data set (both cases are equivalent, thus tries are 
easily invertible btw):


LVL 0: [0, 1, 0, 0]
This first one always occupies full size (it's asserted that there is no 
index >= 4)

LVL 1: [0, 0, 0, 0] [1, 0, 2, 3]
Note again - only full pages, no cheating and half-pages, but we can 
save on the amount of them (note almost obligatory zero page)


So it contains only 1, 2 and 3 at indexes 4, 6, 7 respectively; 
T.init is our way of saying NOT EXISTS ... yes, I think that should be user 
definable.
This way there are no checks anywhere, only shift, add, dereference, 
shift, add, dereference...
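
A small sketch of my reading of that example (not the GSoC code): two levels 
of 4-entry pages stored flat, looked up with only shifts, adds and 
dereferences, with T.init (0 for int) doubling as "does not exist".

---
immutable uint[4] level0 = [0, 1, 0, 0];     // which level-1 page each block uses
immutable int[8]  level1 = [0, 0, 0, 0,      // page 0: the shared all-default page
                            1, 0, 2, 3];     // page 1

int lookup(uint key)        // key in 0 .. 16
{
    auto hi = key >> 2;     // top two bits pick the level-0 slot
    auto lo = key & 3;      // low two bits index into the level-1 page
    return level1[level0[hi] * 4 + lo];
}

unittest
{
    assert(lookup(4) == 1);
    assert(lookup(6) == 2);
    assert(lookup(7) == 3);
    assert(lookup(0) == 0); // T.init: "does not exist"
}
---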



I would say that one by one won't help you much since the speed is
almost the same if not worse.

I guess, in general your statement is true, especially because known
length could improve speed significantly. Not sure (but can easily
believe) that in my particular situation it is true.


That and some simple minded hashing of say first char and the length ;)
In any case you need to munch the whole symbol, I think.


For example, one by
one would allow ignoring key encoding (and thus using multiple encodings
simultaneously just as easily as single).


It's just as easy with the whole thing. Treat it as bytes ;)




The problematic thing with one by one - say you want to stop early,
right?

Why? I'd like to lex inout as TokenKind.InOut, not TokenKind.In followed
by TokenKind.Out. Did I misunderstand your question?



No, I meant stopping on, say, level 2 of a 5-level Trie because the element 
was not found. Stupid idea generally.



Branching and testing are things that kill speed advantage of Tries,
the ones I overlooked in my previous attempt, see std/internal/uni.d.
The other being separate locations for data and index, pointer-happy
disjoint node(page) locality is another way of the same fault.

This concern disturbs me for some time already, and slightly
demotivates, because implementing something this way will likely lead to
a failure. I don't have enough experience with alternatives to know
their advantages and trade-offs. I'll check your code. I did plan to try
table lookup instead of branching. I guess making my own mistakes is
necessary anyway.


It could be enlightening; just don't give up too quickly and don't jump to 
conclusions. In fact, try to be sympathetic with the "losing party", as in 
... "hm, this way is much slower, too bad - I have to optimize it 
somehow". In other words, make sure you squeezed all you can from the "slow" 
method.



Nope, the days of linear lookup for switch are over (were there even
such days?) compiler always do binary se

Re: opCaret to complement opDollar when specifying slices

2012-06-04 Thread Roman D. Boiko
On Monday, 4 June 2012 at 20:26:52 UTC, Steven Schveighoffer 
wrote:

On Mon, 04 Jun 2012 16:13:49 -0400, Mehrdad

Can you use "null"?


Hm... now that null has its own type, I likely could.

I suppose that would map properly to 0.

-Steve
But if the key is non-nullable this might be confusing, or even 
not possible.




Re: opCaret to complement opDollar when specifying slices

2012-06-04 Thread Steven Schveighoffer

On Mon, 04 Jun 2012 16:13:49 -0400, Mehrdad  wrote:


On Monday, 4 June 2012 at 19:55:49 UTC, Steven Schveighoffer wrote:
On Sat, 02 Jun 2012 07:49:16 -0400, Dario Schiavon  
 wrote:



Hi,

I just read some old threads about opDollar and the wish to have it  
work for non zero-based arrays, arrays with gaps, associative arrays  
with non-numerical indices, and so on. It was suggested to define  
opDollar as the end of the array rather than the length (and perhaps  
rename opDollar to opEnd to reflect this interpretation), so that  
collection[someIndex .. $] would consistently refer to a slice from  
someIndex to the end of the collection (of course the keys must have a  
defined ordering for it to make sense).


I'm just thinking, if we want to generalize slices for those cases,  
shouldn't we have a symmetrical operator for the first element of the  
array? Since the $ sign was evidently chosen to parallel the regexp  
syntax, why don't we add ^ to refer to the first element? This way,  
collection[^ .. $] would slice the entire collection, just like  
collection[].


Until now, ^ is only used as a binary operator, so this addition  
shouldn't lead to ambiguous syntax. It surely wouldn't be used as  
often as the opDollar, so I understand if you oppose the idea, but it  
would at least make the language a little more "complete".


I suggested this, and it was shot down rather pointedly by Walter (with  
not very convincing arguments I might add).  Probably not much chance  
of success.


http://forum.dlang.org/post/op.vco5zwhreav7ka@localhost.localdomain


Can you use "null"?


Hm... now that null has its own type, I likely could.

I suppose that would map properly to 0.

-Steve


Re: opCaret to complement opDollar when specifying slices

2012-06-04 Thread Mehrdad
On Monday, 4 June 2012 at 19:55:49 UTC, Steven Schveighoffer 
wrote:
On Sat, 02 Jun 2012 07:49:16 -0400, Dario Schiavon 
 wrote:



Hi,

I just read some old threads about opDollar and the wish to 
have it work for non zero-based arrays, arrays with gaps, 
associative arrays with non-numerical indices, and so on. It 
was suggested to define opDollar as the end of the array 
rather than the length (and perhaps rename opDollar to opEnd 
to reflect this interpretation), so that collection[someIndex 
.. $] would consistently refer to a slice from someIndex to 
the end of the collection (of course the keys must have a 
defined ordering for it to make sense).


I'm just thinking, if we want to generalize slices for those 
cases, shouldn't we have a symmetrical operator for the first 
element of the array? Since the $ sign was evidently chosen to 
parallel the regexp syntax, why don't we add ^ to refer to the 
first element? This way, collection[^ .. $] would slice the 
entire collection, just like collection[].


Until now, ^ is only used as a binary operator, so this 
addition shouldn't lead to ambiguous syntax. It surely 
wouldn't be used as often as the opDollar, so I understand if 
you oppose the idea, but it would at least make the language a 
little more "complete".


I suggested this, and it was shot down rather pointedly by 
Walter (with not very convincing arguments I might add).  
Probably not much chance of success.


http://forum.dlang.org/post/op.vco5zwhreav7ka@localhost.localdomain

-Steve


Can you use "null"?


Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 19:18:32 UTC, Dmitry Olshansky wrote:

On 04.06.2012 22:42, Roman D. Boiko wrote:

... it is possible to create persistent (immutable) tries
with efficient (log N) inserting / deleting (this scenario is 
very important for my DCT project). Immutable hash tables 
would require 0(N) copying for each insert / delete.


Aye, the good thing about them - the amount of data affected by 
any change is localized to one "page". So you just copy it over 
and remap indices/pointers.
Actually, Microsoft went this way (immutable hash tables) in 
their Roslyn preview implementation. However, I still believe 
that tries will work better here. Will check...


Would bulk pre-allocation of memory to be used by trie improve 
locality? With some heuristics for copying when it goes out of 
control.


It is difficult to create a good API for fundamental data 
structures,
because various use cases would motivate different trade-offs. 
The same

is true for implementation. This is why I like your decision to
introduce policies for configuration. Rationale and use cases 
should
help to analyze design of your API and implementation, thus 
you will get

better community feedback :)


Well I guess I'll talk in depth about them in the article, as 
the material exceeds the sane limits of a single NG post.


In brief:
	- multiple levels are stored in one memory chunk one after 
another thus helping a bit with cache-locality (first level 
goes first)
	- constructors do minimize number of "pages" on each level by 
constructing it outwards from the last level and checking 
duplicates (costs ~ O(N^2) though, IIRC)
So this price is paid only on construction, right? Are there 
alternatives to choose from (when needed)? If yes, which?


	- I learned the hard way not to introduce extra conditionals 
anywhere, so there is no "out of range, max index, not 
existent" crap, in all cases it's clean-cut memory access. 
Extra bits lost on having at least one "default" page per level 
can be saved by going extra level
Could you please elaborate? How do you handle situations when not 
existent, etc., is needed?


Your examples deal with lookup by the whole word (first/last 
characters and length are needed). Are your API and 
implementation adaptable for character-by-character trie 
lookup?


I would say that one by one won't help you much since the speed 
is almost the same if not worse.
I guess, in general your statement is true, especially because 
known length could improve speed significantly. Not sure (but can 
easily believe) that in my particular situation it is true. For 
example, one by one would allow ignoring key encoding (and thus 
using multiple encodings simultaneously just as easily as single).


The problematic thing with one by one - say you want to stop 
early, right?
Why? I'd like to lex inout as TokenKind.InOut, not TokenKind.In 
followed by TokenKind.Out. Did I misunderstand your question?


Now you have to check the *NOT FOUND* case, and that implies 
extra branching (if(...)) on _each level_ and maybe reusing 
certain valid values as "not found" marker (extra 
configuration?).
This should be easy, if something is not a keyword, it is likely 
an identifier. But I agree in general, and probably even in my 
case.


Branching and testing are the things that kill the speed advantage of 
Tries, the ones I overlooked in my previous attempt, see 
std/internal/uni.d.
The other is keeping data and index in separate locations; 
pointer-happy disjoint node (page) locality is another form of 
the same fault.
This concern has been disturbing me for some time already, and it 
slightly demotivates me, because implementing something this way will 
likely lead to a failure. I don't have enough experience with the 
alternatives to know their advantages and trade-offs. I'll check 
your code. I did plan to try table lookup instead of branching. I 
guess making my own mistakes is necessary anyway.


Will compile-time generation of lookup code based on tries be 
supported?
Example which is currently in DCT (first implemented by Brian 
Schott in
his Dscanner project) uses switch statements (which means 
lookup linear

in number of possible characters at each position).


Nope, the days of linear lookup for switch are over (were there 
even such days?). Compilers always do binary search nowadays if 
linear isn't more efficient, e.g. for a small number of 
values (it even weighs which is better and uses a 
combination of them).
I thought this *might* be the case, but didn't know nor checked 
anywhere. I also wanted to do linear search for some empirically 
chosen small number of items.


However, you'd better check the asm code afterwards. The compiler is 
like a nasty stepchild: it will give up on generating good old 
jump tables given any reason it finds justifiable. (But it may 
use a few small jump tables + binary search, which could be fine... if 
not in a tight loop!)

Thanks.


A trivial
improvement might be using if statements and binary lookup. 
(E.g., if there are 26 possible characters used at some position, 
use only 5 comparisons, not 26).

Re: Increment / Decrement Operator Behavior

2012-06-04 Thread simendsjo

On Mon, 04 Jun 2012 20:57:11 +0200, simendsjo  wrote:


On Mon, 04 Jun 2012 20:36:14 +0200, Xinok  wrote:

The increment and decrement operators are highly dependent on operator  
precedence and associativity. If the actions are performed in a  
different order than the developer presumed, it could cause unexpected  
behavior.


I had a simple idea to change the behavior of this operator. It works  
for the postfix operators but not prefix. Take the following code:


size_t i = 5;
writeln(i--, i--, i--);

As of now, this writes "543". With my idea, instead it would write,  
"555". Under the hood, the compiler would rewrite the code as:


size_t i = 5;
writeln(i, i, i);
--i;
--i;
--i;

It decrements the variable after the current statement. While not the  
norm, this behavior is at least predictable. For non-static variables,  
such as array elements, the compiler could store a temporary reference  
to the variable so it can decrement it afterwards.


I'm not actually proposing we actually make this change. I simply  
thought it was a nifty idea worth sharing.


If I ever saw a construct like that, I would certainly test how that  
works, then rewrite it.
I wouldn't find it natural with the new behavior either. I would expect  
"543" or "345".
How often do you come across code like that? I think it's an  
anti-pattern, and shouldn't be encouraged even if it was easier to  
understand.


Oh, and what should writeln(i++, ++i, ++i, i++) do?

It is messy whatever the logic implementation.


Re: opCaret to complement opDollar when specifying slices

2012-06-04 Thread Steven Schveighoffer
On Sat, 02 Jun 2012 07:49:16 -0400, Dario Schiavon  
 wrote:



Hi,

I just read some old threads about opDollar and the wish to have it work  
for non zero-based arrays, arrays with gaps, associative arrays with  
non-numerical indices, and so on. It was suggested to define opDollar as  
the end of the array rather than the length (and perhaps rename opDollar  
to opEnd to reflect this interpretation), so that collection[someIndex  
.. $] would consistently refer to a slice from someIndex to the end of  
the collection (of course the keys must have a defined ordering for it  
to make sense).


I'm just thinking, if we want to generalize slices for those cases,  
shouldn't we have a symmetrical operator for the first element of the  
array? Since the $ sign was evidently chosen to parallel the regexp  
syntax, why don't we add ^ to refer to the first element? This way,  
collection[^ .. $] would slice the entire collection, just like  
collection[].


Until now, ^ is only used as a binary operator, so this addition  
shouldn't lead to ambiguous syntax. It surely wouldn't be used as often  
as the opDollar, so I understand if you oppose the idea, but it would at  
least make the language a little more "complete".


I suggested this, and it was shot down rather pointedly by Walter (with  
not very convincing arguments I might add).  Probably not much chance of  
success.


http://forum.dlang.org/post/op.vco5zwhreav7ka@localhost.localdomain

-Steve


Re: Increment / Decrement Operator Behavior

2012-06-04 Thread Bernard Helyer
If you find yourself using postfix increment/decrement operators 
in the same function call in multiple arguments, slap yourself 
firmly in the face and refactor that code.
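
For completeness, a tiny sketch of the kind of refactoring being 
suggested here: make the evaluation order explicit instead of relying 
on how the arguments happen to be evaluated.

import std.stdio;

void main()
{
    size_t i = 5;
    // instead of writeln(i--, i--, i--):
    auto a = i--;
    auto b = i--;
    auto c = i--;
    writeln(a, b, c); // "543", with no question about evaluation order
}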




Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Era Scarecrow

On Monday, 4 June 2012 at 18:06:56 UTC, Denis Shelomovskij wrote:
How can we all answer the same question in the same time 
without synchronization? Looks like it's a magical time...


 Or everyone could be replying at the exact same time (the time 
typing the reply doesn't count towards the time); That's it! Spam 
the forum! j/k


 This would have been funnier if this was within a minute of your 
post :P


Re: runtime hook for Crash on Error

2012-06-04 Thread Steven Schveighoffer

On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston  wrote:


1. There exist cases where you cannot know why the assert failed.
2. Therefore you never know why an assert failed.
3. Therefore it is not safe to unwind the stack from a nothrow function.

Spot the fallacies.

The fallacy in moving from 2 to 3 is more serious than the one from 1 to  
2: this argument is not in any way dependent on the assert occuring in a  
nothrow function. Rather, it's an argument for not having AssertError at  
all.


I'm not sure that is the issue here at all.  What I see is that the  
unwinding of the stack is optional, based on the assumption that there's  
no "right" answer.


However, there is an underlying driver for not unwinding the stack --  
nothrow.  If nothrow results in the compiler optimizing out whatever hooks  
a function needs to properly unwind itself (my limited understanding is  
that this helps performance), then there *is no choice*, you can't  
properly unwind the stack.


-Steve


Re: Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky

On 04.06.2012 22:42, Roman D. Boiko wrote:

On Monday, 4 June 2012 at 15:35:31 UTC, Dmitry Olshansky wrote:

On 04.06.2012 18:19, Andrei Alexandrescu wrote:

On 6/4/12 4:46 AM, Dmitry Olshansky wrote:

Cross-posting from my GSOC list as I think we had a lack of D rox posts
lately :)

[snip]

I think it would be great if you converted this post into an article.

Andrei



Sounds good, will do once I fix few issues that were mentioned
(bit-packing, GC types etc.)


Would be interesting to see some more examples, along with
rationale/motivation for various aspects of your API, and possible usage
scenarios.

Tries are fundamental (both data structures and respective algorithms)
for lookup problems, in the same way as arrays are fundamental for
indexed access.

For example, they have several advantages over hash tables. Hash
calculation requires const * key.length operations, which is equivalent
to the number of comparisons needed for trie lookup. But hash tables may
be less space efficient, or lead to hash collisions which increase
lookup time. Also, it is possible to create persistent (immutable) tries
with efficient (log N) inserting / deleting (this scenario is very
important for my DCT project). Immutable hash tables would require O(N)
copying for each insert / delete.


Aye, the good thing about them - the amount of data affected by any 
change is localized to one "page". So you just copy it over and remap 
indices/pointers.




It is difficult to create a good API for fundamental data structures,
because various use cases would motivate different trade-offs. The same
is true for implementation. This is why I like your decision to
introduce policies for configuration. Rationale and use cases should
help to analyze design of your API and implementation, thus you will get
better community feedback :)


Well I guess I'll talk in depth about them in the article, as the 
material exceeds the sane limits of a single NG post.


In brief:
	- multiple levels are stored in one memory chunk one after another thus 
helping a bit with cache-locality (first level goes first)
	- constructors do minimize number of "pages" on each level by 
constructing it outwards from the last level and checking duplicates 
(costs ~ O(N^2) though, IIRC)
	- I learned the hard way not to introduce extra conditionals anywhere, 
so there is no "out of range, max index, not existent" crap, in all 
cases it's clean-cut memory access. Extra bits lost on having at least 
one "default" page per level can be saved by going extra level

- extra levels ain't that bad as they look, since memory is close 
anyway.
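
To make the multi-stage layout concrete, here is a rough sketch of a 
two-level integer lookup table, assuming 16-bit keys split 8/8. This 
only shows the general shape being described, not the actual Trie code:

struct TwoStage
{
    ushort[] level1; // maps the top 8 bits of a key to a page number
    ubyte[]  level2; // pages of 256 entries each, stored back to back

    ubyte lookup(ushort key) const
    {
        immutable page = level1[key >> 8];        // first stage
        return level2[page * 256 + (key & 0xFF)]; // second stage: plain memory access, no branching
    }
}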



Below are some notes related to my DCT use cases.

Your examples deal with lookup by the whole word (first/last characters
and length are needed). Are your API and implementation adaptable for
character-by-character trie lookup?


I would say that one by one won't help you much since the speed is 
almost the same if not worse.

The problematic thing with one by one - say you want to stop early, right?
Now you have to check the *NOT FOUND* case, and that implies extra 
branching (if(...)) on _each level_ and maybe reusing certain valid 
values as "not found" marker (extra configuration?).


Branching and testing are the things that kill the speed advantage of Tries, the 
ones I overlooked in my previous attempt, see std/internal/uni.d.
The other is keeping data and index in separate locations; pointer-happy 
disjoint node (page) locality is another form of the same fault.




Will compile-time generation of lookup code based on tries be supported?
Example which is currently in DCT (first implemented by Brian Schott in
his Dscanner project) uses switch statements (which means lookup linear
in number of possible characters at each position).


Nope, the days of linear lookup for switch are over (were there even 
such days?). Compilers always do binary search nowadays if linear isn't 
more efficient, e.g. for a small number of values (it even weighs 
which is better and uses a combination of them).


However, you'd better check the asm code afterwards. The compiler is like a nasty 
stepchild: it will give up on generating good old jump tables given any 
reason it finds justifiable. (But it may use a few small jump tables + 
binary search, which could be fine... if not in a tight loop!)


A trivial

improvement might be using if statements and binary lookup. (E.g., if
there are 26 possible characters used at some position, use only 5
comparisons, not 26).



Moreover, you'd be surprised, but such leap-frog binary search loses by a 
big margin to a _multilayer_ lookup table. (I for one was quite shocked 
back then.)
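
As a rough illustration of the two approaches being compared (the 
character classes below are made up for the sketch):

// a handful of nested comparisons, roughly log2(26) deep
int classifyByCompare(char c)
{
    if (c < 'n')
        return c < 'g' ? (c < 'd' ? 0 : 1) : (c < 'j' ? 2 : 3);
    else
        return c < 't' ? (c < 'q' ? 4 : 5) : (c < 'w' ? 6 : 7);
}

// a flat 256-entry table built once: a single indexed load per lookup, no branches
immutable ubyte[256] classTable = ()
{
    ubyte[256] t;
    foreach (i, ref e; t)
        e = cast(ubyte)(i / 32); // arbitrary bucketing, just for the sketch
    return t;
}();

int classifyByTable(char c)
{
    return classTable[c];
}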



I wanted to analyse your regex implementation, but that's not an easy
task and requires a lot of effort...


Yeah, sorry for some encrypted Klingon here and there ;)


It looks like the most promising
alternative to the binary trie lookup which I described in the previous
paragraph. Similarities and differences with your regex design might
also help us understand tries better.

Re: GitHub for Windows

2012-06-04 Thread Steven Schveighoffer
On Sat, 02 Jun 2012 16:30:07 -0400, Nick Sabalausky  
 wrote:


Ouch. I haven't had virus problems on my XP system (knock on wood...),  
but

my sister's had a lot of virus trouble on her Win7 machine (and guess who
had to fix the fucking thing every time...) Of course, my dad had a lot  
of

virus trouble on his old XP machne (and again, guess who got to fix the
goddamn thing), but then again, he's an idiot and does all sorts of  
stupid
shit like click on ads, and give the advertiser pages his phone number  
when

they ask for it, and doing all that *despite* noticing that it all seemed
fishy, and god knows what else that he *hasn't* told me about. Colossal
fucking moron.


Hehe, I think all of us here have similar stories.

My in-laws have vista, and after I had to reinstall their computer due to  
malware messing up some internal microsoft services, I told them either  
they find someone else to help them with the computer, or agree to be  
non-admin users on their system.  Now only I have admin privileges, and  
things have gone much smoother since then.


Unfortunately, malware can still fuck up your IE profile.

-Steve


Re: Increment / Decrement Operator Behavior

2012-06-04 Thread simendsjo

On Mon, 04 Jun 2012 20:36:14 +0200, Xinok  wrote:

The increment and decrement operators are highly dependent on operator  
precedence and associativity. If the actions are performed in a  
different order than the developer presumed, it could cause unexpected  
behavior.


I had a simple idea to change the behavior of this operator. It works  
for the postfix operators but not prefix. Take the following code:


size_t i = 5;
writeln(i--, i--, i--);

As of now, this writes "543". With my idea, instead it would write,  
"555". Under the hood, the compiler would rewrite the code as:


size_t i = 5;
writeln(i, i, i);
--i;
--i;
--i;

It decrements the variable after the current statement. While not the  
norm, this behavior is at least predictable. For non-static variables,  
such as array elements, the compiler could store a temporary reference  
to the variable so it can decrement it afterwards.


I'm not actually proposing we actually make this change. I simply  
thought it was a nifty idea worth sharing.


If I ever saw a construct like that, I would certainly test how that  
works, then rewrite it.
I wouldn't find it natural with the new behavior either. I would expect  
"543" or "345".
How often do you come across code like that? I think it's an anti-pattern,  
and shouldn't be encouraged even if it was easier to understand.


Re: AST Macros?

2012-06-04 Thread Jacob Carlborg

On 2012-06-04 10:03, Don Clugston wrote:


AST macros were discussed informally on the day after the conference,
and it quickly became clear that the proposed ones were nowhere near
powerful enough. Since that time nobody has come up with another
proposal, as far as I know.


I think others have suggested doing something similar like Nemerle, 
Scala or Nimrod.


--
/Jacob Carlborg


Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 12:15:33 UTC, Denis Shelomovskij wrote:

On 04.06.2012 13:46, Dmitry Olshansky wrote:

enum keywords = [
"abstract",
"alias",
"align",
//...  all of them, except @ ones
"__traits"
];


A nitpick: constant arrays should be defined as `immutable` 
instead of `enum`. `enum` means that every time `keywords` is used, a 
new mutable array is dynamically allocated:

---
auto x = keywords;
auto y = keywords;
assert(x !is y); // passes
x[0] = ""; // legal, its mutable
---


Well, in my case they were only used for mixin code generation 
(IIRC). Dmitry obviously has a different situation.


Re: Making generalized Trie type in D

2012-06-04 Thread Roman D. Boiko

On Monday, 4 June 2012 at 15:35:31 UTC, Dmitry Olshansky wrote:

On 04.06.2012 18:19, Andrei Alexandrescu wrote:

On 6/4/12 4:46 AM, Dmitry Olshansky wrote:
Cross-posting from my GSOC list as I think we had a lack of D 
rox posts

lately :)

[snip]

I think it would be great if you converted this post into an 
article.


Andrei



Sounds good, will do once I fix few issues that were mentioned 
(bit-packing, GC types etc.)


Would be interesting to see some more examples, along with 
rationale/motivation for various aspects of your API, and 
possible usage scenarios.


Tries are fundamental (both data structures and respective 
algorithms) for lookup problems, in the same way as arrays are 
fundamental for indexed access.


For example, they have several advantages over hash tables. Hash 
calculation requires const * key.length operations, which is 
equivalent to the number of comparisons needed for trie lookup. 
But hash tables may be less space efficient, or lead to hash 
collisions which increase lookup time. Also, it is possible to 
create persistent (immutable) tries with efficient (log N) 
inserting / deleting (this scenario is very important for my DCT 
project). Immutable hash tables would require O(N) copying for 
each insert / delete.
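
A minimal sketch of the path-copying idea behind persistent tries, 
assuming a binary trie over uint keys with one key bit per level; the 
names and layout are illustrative only, not DCT's or the proposed 
Phobos design:

final class Node
{
    int value;     // payload, meaningful once all key bits are consumed
    Node[2] child; // branch on one bit of the key per level
}

// Returns a new root. Only the nodes along the path to `key` are copied;
// every untouched sub-trie is shared with the previous version, which is
// what keeps an insert at O(depth) instead of O(N) duplication.
Node insert(Node root, uint key, int value, int bit = 31)
{
    auto n = new Node;
    if (root !is null)
    {
        n.value = root.value;
        n.child = root.child; // share both children for now
    }
    if (bit < 0)
    {
        n.value = value; // all bits consumed: store the payload here
        return n;
    }
    immutable idx = (key >> bit) & 1;
    n.child[idx] = insert(n.child[idx], key, value, bit - 1);
    return n;
}

void main()
{
    auto v1 = insert(null, 5, 42);
    auto v2 = insert(v1, 9, 7); // v1 is still valid and unchanged
}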


It is difficult to create a good API for fundamental data 
structures, because various use cases would motivate different 
trade-offs. The same is true for implementation. This is why I 
like your decision to introduce policies for configuration. 
Rationale and use cases should help to analyze design of your API 
and implementation, thus you will get better community feedback :)


Below are some notes related to my DCT use cases.

Your examples deal with lookup by the whole word (first/last 
characters and length are needed). Are your API and 
implementation adaptable for character-by-character trie lookup?


Will compile-time generation of lookup code based on tries be 
supported? Example which is currently in DCT (first implemented 
by Brian Schott in his Dscanner project) uses switch statements 
(which means lookup linear in number of possible characters at 
each position). A trivial improvement might be using if 
statements and binary lookup. (E.g., if there are 26 possible 
characters used at some position, use only 5 comparisons, not 26).


I wanted to analyse your regex implementation, but that's not an 
easy task and requires a lot of effort... It looks like the most 
promising alternative to binary trie lookup which I described in 
previous paragraph. Similarities and differences with your regex 
design might also help us understand tries better.


Increment / Decrement Operator Behavior

2012-06-04 Thread Xinok
The increment and decrement operators are highly dependent on 
operator precedence and associativity. If the actions are 
performed in a different order than the developer presumed, it 
could cause unexpected behavior.


I had a simple idea to change the behavior of this operator. It 
works for the postfix operators but not prefix. Take the 
following code:


size_t i = 5;
writeln(i--, i--, i--);

As of now, this writes "543". With my idea, instead it would 
write, "555". Under the hood, the compiler would rewrite the code 
as:


size_t i = 5;
writeln(i, i, i);
--i;
--i;
--i;

It decrements the variable after the current statement. While not 
the norm, this behavior is at least predictable. For non-static 
variables, such as array elements, the compiler could store a 
temporary reference to the variable so it can decrement it 
afterwards.


I'm not actually proposing we actually make this change. I simply 
thought it was a nifty idea worth sharing.


Re: More synchronized ideas

2012-06-04 Thread Steven Schveighoffer
On Mon, 04 Jun 2012 07:17:45 -0400, Michel Fortin  
 wrote:


After trying to make sense of the thread "synchronized  
(this[.classinfo]) in druntime and phobos", I had to write my opinion on  
all this somewhere that wouldn't be instantly lost in a bazilion of  
posts. It turned out into something quite elaborate.





I like this.  But it needs a lot of work.

A few comments:

1. This does not handle shared *at all*.  Presumably, there is no reason  
to lock unshared data, so this has to be handled somewhere.  If you say  
"synchronized implies shared", well, then how do you have a shared int  
inside an unshared class?  My instinct is that all the methods that need  
to used synchronized need to be declared shared (meaning the whole class  
data is shared).  But that sucks, because what if you have a thread-local  
instance?


I have an idea to solve this.  Since the mutexes are implicit, we can  
declare space for them, but only allocate them when the class instance is  
shared (allocated on construction).  Then when synchronized goes to lock  
them, if they are null, do nothing.
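
A rough hand-rolled sketch of that idea (the real thing would be 
compiler-generated; the names and the isShared flag are made up for 
illustration):

import core.sync.mutex : Mutex;

class C
{
    private Mutex mtx; // space reserved; stays null for thread-local instances

    this(bool isShared)
    {
        if (isShared)
            mtx = new Mutex; // allocated only when the instance is shared
    }

    void method()
    {
        if (mtx !is null) mtx.lock();
        scope (exit) if (mtx !is null) mtx.unlock();
        // ... body that would have been synchronized ...
    }
}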


But what if some data is not marked synchronized?

I can see why Bartosz had such trouble creating a sharing system in a  
simple manner...


2. As far as determining a mutex to protect multiple items of data, what  
about:


synchronized(symbolname) int x, int y;

or

synchronized(symbolname)
{
   int x;
   int y;
}

where you cannot do synchronized(x) or synchronized(y), and cannot read or  
write x or y without doing synchronized(symbolname).


-Steve
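
For contrast, a rough sketch of the discipline that has to be maintained 
by hand today, with an explicit private lock object guarding both fields 
(names are made up):

class Gauge
{
    private Object xyLock; // guards x and y together, by convention only
    private int x, y;

    this() { xyLock = new Object; }

    void bump()
    {
        synchronized (xyLock) // nothing stops code elsewhere from forgetting this
        {
            ++x;
            ++y;
        }
    }
}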


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Denis Shelomovskij

On 04.06.2012 21:57, John Chapman wrote:

On Monday, 4 June 2012 at 17:52:10 UTC, Mehrdad wrote:

On Monday, 4 June 2012 at 16:37:45 UTC, Kagamin wrote:

On Monday, 4 June 2012 at 13:58:22 UTC, Mehrdad wrote:

(@Andrej Mitrovic, mainly)

So I was using your library :) and this happened:

http://stackoverflow.com/questions/10878586

Any ideas?


You need to specify subsystem 4.0 or something like that, google for
.def file docs on dmc site.


I don't think that's the issue -- the subsystem for the C code
is Console.


Yes, but it defaults to 3.10 - you need to specify 4 or higher.

-L/SUBSYSTEM:Console:4


How can we all answer the same question in the same time without 
synchronization? Looks like it's a magical time...


--
Денис В. Шеломовский
Denis V. Shelomovskij


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread John Chapman

On Monday, 4 June 2012 at 17:52:10 UTC, Mehrdad wrote:

On Monday, 4 June 2012 at 16:37:45 UTC, Kagamin wrote:

On Monday, 4 June 2012 at 13:58:22 UTC, Mehrdad wrote:

(@Andrej Mitrovic, mainly)

So I was using your library :) and this happened:

http://stackoverflow.com/questions/10878586

Any ideas?


You need to specify subsystem 4.0 or something like that, 
google for .def file docs on dmc site.


I don't think that's the issue -- the subsystem for the C code
is Console.


Yes, but it defaults to 3.10 - you need to specify 4 or higher.

-L/SUBSYSTEM:Console:4


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Denis Shelomovskij

On 04.06.2012 17:58, Mehrdad wrote:

(@Andrej Mitrovic, mainly)

So I was using your library :) and this happened:

http://stackoverflow.com/questions/10878586

Any ideas?


Answer posted to stackoverflow.

--
Денис В. Шеломовский
Denis V. Shelomovskij


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Mehrdad

On Monday, 4 June 2012 at 17:52:10 UTC, Mehrdad wrote:

On Monday, 4 June 2012 at 16:37:45 UTC, Kagamin wrote:

On Monday, 4 June 2012 at 13:58:22 UTC, Mehrdad wrote:

(@Andrej Mitrovic, mainly)

So I was using your library :) and this happened:

http://stackoverflow.com/questions/10878586

Any ideas?


You need to specify subsystem 4.0 or something like that, 
google for .def file docs on dmc site.


I don't think that's the issue -- the subsystem for the C code
is Console.


Oooh, wait, that's different... I'll try that, thanks.


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Mehrdad

On Monday, 4 June 2012 at 16:20:00 UTC, Andrej Mitrovic wrote:

On 6/4/12, Mehrdad  wrote:

(@Andrej Mitrovic, mainly)

So I was using your library :) and this happened:

http://stackoverflow.com/questions/10878586

Any ideas?



I'm not sure where it went wrong for you. I've made a test 
folder
inside the samples dir, pasted your snippet to test.d and 
compiled it

like so:

D:\dev\projects\DWinProgramming\Samples\Extra\Test>..\..\..\build.exe 
%cd%


Works for me: http://imgur.com/IIw82


what.

I guess I'll just switch to an official version of the compiler?
(What version are you on?)

(Thanks for replying!)


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Mehrdad

On Monday, 4 June 2012 at 16:37:45 UTC, Kagamin wrote:

On Monday, 4 June 2012 at 13:58:22 UTC, Mehrdad wrote:

(@Andrej Mitrovic, mainly)

So I was using your library :) and this happened:

http://stackoverflow.com/questions/10878586

Any ideas?


You need to specify subsystem 4.0 or something like that, 
google for .def file docs on dmc site.


I don't think that's the issue -- the subsystem for the C code
is Console.


Re: More synchronized ideas

2012-06-04 Thread Michel Fortin

On 2012-06-04 13:15:57 +, "Jason House"  said:

The example about current D seems a bit lacking. You change behavior 
because it's easy to code a different way. If you really want to use 
synchronized classes, then you should have two of them. The map can be 
in one synchronized class and the counters can be in another. the main 
class in the example would simply call methods on the synchronized 
objects.


Valid comment. I thought about creating yet another example 
illustrating that, but I gave up when realizing the silliness of it. I 
mean, yes you can make it work, but at the price of writing a lot of 
boilerplate code just for forwarding things around.


Here's a modified implementation of that dictionary class, wrapping the 
translations AA and the two counters in two distinct classes:


class Dictionary
{
private SynchronizedStringMap translations;
private SynchronizedCounters counters;

void addWord(string word, string foreignWord)
{
synchronized(translations)
{
translations[word] = foreignWord;

counters.addUnconfirmed();
}
}


bool confirmWord(string word, string foreignWord)
{
synchronized(translations)
{
string* found = word in translations;
if (!found)
return false;
if (*found != foreignWord)
return false;
}
synchronized(counters)
{
counters.removeUnconfirmed();
counters.addConfirmed();
}
globalNotifyWordConfirmed(word, foreignWord);
return true;
}
}

That done, I still have to implement two other classes: 
SynchronizedStringMap, which needs to implement the '[]' and 'in' 
operators, and SynchronizedCounters, which needs to implement 
addUnconfirmed, removeUnconfirmed, and addConfirmed. All those 
functions just forward calls to the wrapped variable(s). And all this 
for a simple toy example.


That said, the other point is that it's too easy to shoot yourself in 
the foot using implicit synchronization. It's easy to forget you still 
have a mutex locked when adding the call to globalNotifyWordConfirmed, 
especially if you add this line of code later in the development 
process and you have forgotten about the "synchronized" keyword far 
away at the very top of the class declaration. And once you have a 
deadlock and you've identified the culprit, to fix the bug you need to 
split everything that needs to be synchronized into a separate class, a 
tiresome and bug-prone process, just because you can't opt-out of the 
implicit synchronization. Call me nuts if you want, but I think this is 
awful.



That being said, I've never used synchronized classes in my 
multithreaded D1/D2 code. I used Tango's Mutexes and Conditions. 
They're more flexible.


I'd say it's a good choice. How does it work with shared variables in 
D2, or are you just ignoring the type system?



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky

On 04.06.2012 16:15, Denis Shelomovskij wrote:

On 04.06.2012 13:46, Dmitry Olshansky wrote:

enum keywords = [
"abstract",
"alias",
"align",
//... all of them, except @ ones
"__traits"
];


A nitpick: constant arrays should be defined as `immutable` instead of
`enum`. `enum` means that every time `keywords` is used, a new mutable array is
dynamically allocated:
---
auto x = keywords;
auto y = keywords;
assert(x !is y); // passes
x[0] = ""; // legal, its mutable
---



Thanks, the mutable part is then a consequence of being another rvalue.
Obviously table should have been immutable.


--
Dmitry Olshansky


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Kagamin

On Monday, 4 June 2012 at 13:58:22 UTC, Mehrdad wrote:

Any ideas?


Optlink says HI!


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Kagamin

On Monday, 4 June 2012 at 13:58:22 UTC, Mehrdad wrote:

(@Andrej Mitrovic, mainly)

So I was using your library :) and this happened:

http://stackoverflow.com/questions/10878586

Any ideas?


You need to specify subsystem 4.0 or something like that, google 
for .def file docs on dmc site.


Re: [ offtopic ] About the "C++ Compilation Speed" article on DrDobbs

2012-06-04 Thread SomeDude
On Monday, 4 June 2012 at 16:03:34 UTC, Victor Vicente de 
Carvalho wrote:

Walter:

On this publication from 2010 
http://www.drdobbs.com/blogs/cpp/228701711 you gave some 
insight and promised a follow up on what design decisions are 
important to make a compiler fast. Did you happen to have 
written it?


They are implemented in the D compiler as part of the language 
design (modules, no preprocessor, avoidance of ambiguities), and 
I believe the C++ compiler is faster than most others, although 
the language doesn't allow many optimizations, due to the 
preprocessor and the numerous syntactic ambiguities.


Re: Phobos pull 613

2012-06-04 Thread Brad Anderson
On Sun, Jun 3, 2012 at 4:49 PM, Brad Anderson  wrote:

> This pull [1] adds std.net.curl to the Windows makefiles (it has been
> missing for the last couple of releases). It'd be great if this could be
> merged before the next release. Denis figured out what was preventing the
> autotester from working (this is what prevented my pulls to fix this from
> getting applied) so it should be good to go now.
>
> I had a pull for the Windows installer which has already been applied
> which automatically downloads the curl binaries. That change, combined with
> Denis' pull, should make curl support for Windows as simple as it can be
> (as long as you use the installer) while meeting Walter's desire to not
> bundle curl binaries with phobos.
>
> Could a phobos committer please review this pull and merge it before 2.060
> is released?  Then we won't get the recurring question of why std.net.curl
> isn't working on Windows.
>
> [1] 
> https://github.com/D-Programming-Language/phobos/pull/613
>
> Regards,
> Brad Anderson
>

Thanks, Jonathan. You are a gentleman and a scholar.


Re: Same _exact_ code in C and D give different results — why?

2012-06-04 Thread Andrej Mitrovic
On 6/4/12, Mehrdad  wrote:
> (@Andrej Mitrovic, mainly)
>
> So I was using your library :) and this happened:
>
> http://stackoverflow.com/questions/10878586
>
> Any ideas?
>

I'm not sure where it went wrong for you. I've made a test folder
inside the samples dir, pasted your snippet to test.d and compiled it
like so:

> D:\dev\projects\DWinProgramming\Samples\Extra\Test>..\..\..\build.exe %cd%

Works for me: http://imgur.com/IIw82


[ offtopic ] About the "C++ Compilation Speed" article on DrDobbs

2012-06-04 Thread Victor Vicente de Carvalho

Walter:

On this publication from 2010 
http://www.drdobbs.com/blogs/cpp/228701711 you gave some insight 
and promised a follow up on what design decisions are important 
to make a compiler fast. Did you happen to have written it?




Re: Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky

On 04.06.2012 18:19, Andrei Alexandrescu wrote:

On 6/4/12 4:46 AM, Dmitry Olshansky wrote:

Cross-posting from my GSOC list as I think we had a lack of D rox posts
lately :)

[snip]

I think it would be great if you converted this post into an article.

Andrei



Sounds good, will do once I fix few issues that were mentioned 
(bit-packing, GC types etc.)


--
Dmitry Olshansky


Re: [Proposal] Additional operator overloadings for multidimentional indexing and slicing

2012-06-04 Thread bearophile

Don Clugston:

You mean like the old opStar() (which meant deref), or like 
opBinary("+") ?





I meant the even older ones :-)

Bye,
bearophile


Re: [Proposal] Additional operator overloadings for multidimentional indexing and slicing

2012-06-04 Thread Don Clugston

On 04/06/12 15:38, bearophile wrote:

David Nadlinger:


Actually, I'd say its the other way round – opDollar rather
corresponds to opDoubleEqualSign, as it simply describes the character
used.


I agree. It's the opposite of the semantic names of the original
operator overloading set.


You mean like the old opStar() (which meant deref), or like opBinary("+") ?




Re: Making generalized Trie type in D

2012-06-04 Thread Andrei Alexandrescu

On 6/4/12 4:46 AM, Dmitry Olshansky wrote:

Cross-posting from my GSOC list as I think we had a lack of D rox posts
lately :)

[snip]

I think it would be great if you converted this post into an article.

Andrei



Re: [Proposal] Additional operator overloadings for multidimentional indexing and slicing

2012-06-04 Thread bearophile

David Nadlinger:

Actually, I'd say its the other way round – opDollar rather 
corresponds to opDoubleEqualSign, as it simply describes the 
character used.


I agree. It's the opposite of the semantic names of the original
operator overloading set.

Bye,
bearophile


Re: More synchronized ideas

2012-06-04 Thread Jason House

On Monday, 4 June 2012 at 11:17:45 UTC, Michel Fortin wrote:
After trying to make sense of the thread "synchronized 
(this[.classinfo]) in druntime and phobos", I had to write my 
opinion on all this somewhere that wouldn't be instantly lost 
in a bazilion of posts. It turned out into something quite 
elaborate.





The example about current D seems a bit lacking. You change 
behavior because it's easy to code a different way. If you really 
want to use synchronized classes, then you should have two of 
them. The map can be in one synchronized class and the counters 
can be in another. the main class in the example would simply 
call methods on the synchronized objects.


That being said, I've never used synchronized classes in my 
multithreaded D1/D2 code. I used Tango's Mutexes and Conditions. 
They're more flexible. That being said, I think the use of a 
synchronized class for your counters is perfectly reasonable and 
would simplify validating the code for correctness.
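
For readers who haven't used that style: a rough sketch of a 
mutex-plus-condition guarded queue using druntime's core.sync (the 
shapes are close to Tango's). It deliberately ignores shared, as 
discussed, and is only illustrative:

import core.sync.mutex : Mutex;
import core.sync.condition : Condition;

class Queue
{
    private int[] items;
    private Mutex lock;
    private Condition notEmpty;

    this()
    {
        lock = new Mutex;
        notEmpty = new Condition(lock);
    }

    void put(int x)
    {
        synchronized (lock) // a Mutex doubles as its own monitor object
        {
            items ~= x;
            notEmpty.notify();
        }
    }

    int take()
    {
        synchronized (lock)
        {
            while (items.length == 0)
                notEmpty.wait(); // releases the mutex while blocked
            auto x = items[0];
            items = items[1 .. $];
            return x;
        }
    }
}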


Re: Making generalized Trie type in D

2012-06-04 Thread Denis Shelomovskij

On 04.06.2012 13:46, Dmitry Olshansky wrote:

enum keywords = [
 "abstract",
 "alias",
 "align",
 //...  all of them, except @ ones
 "__traits"
 ];


A nitpick: constant arrays should be defined as `immutable` instead of 
`enum`. `enum` means that every time `keywords` is used, a new mutable array is 
dynamically allocated:

---
auto x = keywords;
auto y = keywords;
assert(x !is y); // passes
x[0] = ""; // legal, its mutable
---

--
Денис В. Шеломовский
Denis V. Shelomovskij
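
A sketch of the suggested fix, for contrast with the enum version above:

immutable string[] keywords = [
    "abstract",
    "alias",
    "align",
    // ...
    "__traits"
];

unittest
{
    auto x = keywords;
    auto y = keywords;
    assert(x is y); // same array every time, no allocation per use
    // x[0] = "";   // would not compile: cannot modify immutable data
}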


More synchronized ideas

2012-06-04 Thread Michel Fortin
After trying to make sense of the thread "synchronized 
(this[.classinfo]) in druntime and phobos", I had to write my opinion 
on all this somewhere that wouldn't be instantly lost in a bazilion of 
posts. It turned out into something quite elaborate.




--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: [Proposal] Additional operator overloadings for multidimentional indexing and slicing

2012-06-04 Thread John Chapman

On Monday, 4 June 2012 at 10:00:20 UTC, Dmitry Olshansky wrote:

On 04.06.2012 13:57, Don Clugston wrote:

On 03/06/12 19:31, tn wrote:

On Friday, 1 June 2012 at 01:57:36 UTC, kenji hara wrote:

I'd like to propose a new language feature to D community.
...
This patch is an additional enhancement of opDollar (issue 
3474 and

#442).



Sounds awesome. However, the name opDollar should be changed 
to

something like opSize, opLength, opEnd or almost anything else
than the current name.


opDollar is a pretty awful name but nobody could come up with 
something
that is less awful. At least it is not confusing. Everybody 
instantly

knows what it does.
For built-in arrays $ is the length and the size, but that 
isn't

generally true.

Wish we had a better name, but opLength isn't it, and nor is 
opSize.
opEnd might be the best of those three, but it kinda sounds 
like

something to do with ranges.


opEndSymbol ? It's no dollar but it's clear what it overloads.


Maybe opUpperBound or opUBound?


Re: [Proposal] Additional operator overloadings for multidimentional indexing and slicing

2012-06-04 Thread David Nadlinger

On Monday, 4 June 2012 at 10:07:38 UTC, Jonathan M Davis wrote:
TDPL already lists opDollar, and it's overloading the $ 
operator, so I would
dispute that a better name _could_ exist. That would be like 
saying that
opEquals would be better if it were renamed to 
opDoubleEqualSign.


Actually, I'd say its the other way round – opDollar rather 
corresponds to opDoubleEqualSign, as it simply describes the 
character used. But I agree that there isn't really a good reason 
to change opDollar at this point.


David


Re: Exception/Error division in D

2012-06-04 Thread Don Clugston

On 01/06/12 22:35, Walter Bright wrote:

On 6/1/2012 11:14 AM, deadalnix wrote:

We are talking about runing scope statement and finally when unwiding
the stack,
not trying to continue the execution of the program.


Which will be running arbitrary code not anticipated by the assert
failure, and code that is highly unlikely to be desirable for shutdown.


Sorry, Walter, that's complete bollocks.

try {
   assert(x == 2);
} catch(AssertException e)
{
   foo();   
}

is exactly equivalent to:

version (release)
{}
else
{
   if (x!=2) foo();
}

Bad practice, sure. But it's not running arbitrary, unanticipated code.



Re: runtime hook for Crash on Error

2012-06-04 Thread Don Clugston

On 01/06/12 12:26, Walter Bright wrote:

On 6/1/2012 1:48 AM, Dmitry Olshansky wrote:

On 01.06.2012 5:16, Walter Bright wrote:

On 5/31/2012 3:22 AM, Dmitry Olshansky wrote:

On 31.05.2012 13:06, deadalnix wrote:

This is called failing gracefully. And this highly recommended, and
you
KNOW that the system will fail at some point.


Exactly. + The point I tried to argue but it was apparently lost:
doing stack unwinding and cleanup on most Errors (some Errors like
stack
overflow might not recoverable) is the best thing to do.


This is all based on the assumption that the program is still in a valid
state after an assert fail, and so any code executed after that and the
data it relies on is in a workable state.


> This is a completely wrong assumption.

To be frank a "completely wrong assumption" is flat-out exaggeration.
The only
problem that can make it "completely wrong" is memory corruption.
Others just
depend on specifics of system, e.g. wrong arithmetic in medical
software ==
critical, arithmetic bug in "refracted light color component" in say
3-D game is
no problem, just log it and recover. Or better - save game and then crash
gracefully.


Except that you do not know why the arithmetic turned out wrong - it
could be the result of memory corruption.


This argument seems to be:

1. There exist cases where you cannot know why the assert failed.
2. Therefore you never know why an assert failed.
3. Therefore it is not safe to unwind the stack from a nothrow function.

Spot the fallacies.

The fallacy in moving from 2 to 3 is more serious than the one from 1 to 
2: this argument is not in any way dependent on the assert occuring in a 
nothrow function. Rather, it's an argument for not having AssertError at 
all.






Re: [Proposal] Additional operator overloadings for multidimentional indexing and slicing

2012-06-04 Thread Jonathan M Davis
On Monday, June 04, 2012 14:00:18 Dmitry Olshansky wrote:
> On 04.06.2012 13:57, Don Clugston wrote:
> > On 03/06/12 19:31, tn wrote:
> >> On Friday, 1 June 2012 at 01:57:36 UTC, kenji hara wrote:
> >>> I'd like to propose a new language feature to D community.
> >>> ...
> >>> This patch is an additional enhancement of opDollar (issue 3474 and
> >>> #442).
> >> 
> >> Sounds awesome. However, the name opDollar should be changed to
> >> something like opSize, opLength, opEnd or almost anything else
> >> than the current name.
> > 
> > opDollar is a pretty awful name but nobody could come up with something
> > that is less awful. At least it is not confusing. Everybody instantly
> > knows what it does.
> > For built-in arrays $ is the length and the size, but that isn't
> > generally true.
> > 
> > Wish we had a better name, but opLength isn't it, and nor is opSize.
> > opEnd might be the best of those three, but it kinda sounds like
> > something to do with ranges.
> 
> opEndSymbol ? It's no dollar but it's clear what it overloads.

TDPL already lists opDollar, and it's overloading the $ operator, so I would 
dispute that a better name _could_ exist. That would be like saying that 
opEquals would be better if it were renamed to opDoubleEqualSign.

- Jonathan M Davis


Re: [Proposal] Additional operator overloadings for multidimentional indexing and slicing

2012-06-04 Thread Dmitry Olshansky

On 04.06.2012 13:57, Don Clugston wrote:

On 03/06/12 19:31, tn wrote:

On Friday, 1 June 2012 at 01:57:36 UTC, kenji hara wrote:

I'd like to propose a new language feature to D community.
...
This patch is an additional enhancement of opDollar (issue 3474 and
#442).



Sounds awesome. However, the name opDollar should be changed to
something like opSize, opLength, opEnd or almost anything else
than the current name.


opDollar is a pretty awful name but nobody could come up with something
that is less awful. At least it is not confusing. Everybody instantly
knows what it does.
For built-in arrays $ is the length and the size, but that isn't
generally true.

Wish we had a better name, but opLength isn't it, and nor is opSize.
opEnd might be the best of those three, but it kinda sounds like
something to do with ranges.


opEndSymbol ? It's no dollar but it's clear what it overloads.

--
Dmitry Olshansky


Re: [Proposal] Additional operator overloadings for multidimentional indexing and slicing

2012-06-04 Thread Don Clugston

On 03/06/12 19:31, tn wrote:

On Friday, 1 June 2012 at 01:57:36 UTC, kenji hara wrote:

I'd like to propose a new language feature to D community.
...
This patch is an additional enhancement of opDollar (issue 3474 and
#442).



Sounds awesome. However, the name opDollar should be changed to
something like opSize, opLength, opEnd or almost anything else
than the current name.


opDollar is a pretty awful name but nobody could come up with something 
that is less awful. At least it is not confusing. Everybody instantly 
knows what it does.
For built-in arrays $ is the length and the size, but that isn't 
generally true.


Wish we had a better name, but opLength isn't it, and nor is opSize.
opEnd might be the best of those three, but it kinda sounds like 
something to do with ranges.


Making generalized Trie type in D

2012-06-04 Thread Dmitry Olshansky
Cross-posting from my GSOC list as I think we had a lack of D rox posts 
lately :)


I am rarely surprised by generic code flexibility and power these days.
But once I fleshed out the last bits of the generalized Trie, I was amazed.

In short: it allows something like "make a multistage integer lookup 
table using these x upper bits, these y bits in the middle and those 
last z bits as the final offset" as a one-liner. Better yet, strings and 
other keys are just as easy, and even better, it's up to the user to 
provide custom ways of getting X bits from any key.
And while I'm writing this, bit-packing is on its way to being integrated, 
thus obtaining ultra-small tables (we can even pack index entries since 
we know exactly how many bits they take).


And last but not least - it's easy to strap a Trie on top of some other 
container (say arrays, sets, hashtables etc.). (Currently it doesn't 
play nice with GC-ed objects, I'll fix that later.)
Say, a book catalog that goes by the first letter (or two) as a Trie stage 
and then uses a sorted array as the final slot. The advantage of such 
schemes is that re-sorting huge arrays can get costly; moreover, insertion 
can reallocate, and that might even run out of memory while duping the 
array. On the other hand, smaller arrays keep the sorting bounded at an 
acceptable level.


(In fact a hashtable is a degenerate case of a 1-level Trie on top of e.g. 
a List, just not space-optimized and with a weird index transform.)


To give you a taste of this power, consider a simple example - a keyword 
recognizer. I'm showing two versions: the first just checks whether 
something is a keyword, the second catalogs it to a specific kind.

(sample material taken from DCT project, thanks goes to Roman Boiko)

enum keywords = [
"abstract",
"alias",
"align",
//...  all of them, except @ ones
"__traits"
];

//here is the bare minimum set type usable for Trie
 struct SmallMap(size_t N, V, K)
{
void insert(Tuple!(V, K) t){ _set.insert(t); }

V opBinaryRight(string op, T)(T key)
if(op == "in")
{
auto idx = map!"a[1]"(_set.items[]).countUntil(key);
return idx < 0 ? V.init : _set.items[idx][0];
}
private:
SmallSet!(N, Tuple!(V, K)) _set;
}
//define how we get bits for our index
   size_t useLength(T)(T[] arr)
{
        return arr.length > 63 ? 0 : arr.length; // need a max length: 64 values -> 6 bits
    }

template useLastItem(T)
{
size_t entity(in T[] arr){ return arr[$-1]; }
enum bitSize = 8*T.sizeof;
}
//... useItemAt is similar


//now the usage:
auto keyTrie = Trie!(SetAsSlot!(SmallSet!(2,string))
 , string
 , assumeSize!(6, useLength)
 , useItemAt!(0, char)
 , useLastItem!(char)
)(keywords);

foreach(key; keywords)
//opIndex gives us a slot, here slot is set so test it with 'in'
assert( key in keyTrie[key]);


And that's it. Importantly there is not a single bit of hardcoding here:
- we choose to make Trie of sets by passing SetAsSlot for value 
type (simple Trie may just use say bool)

	- key type is string obviously
	- assumeSize takes a user-defined functor and "trusts" the user that 
its result indeed fits into an X-bit-wide integer
	- the table as constructed goes by levels: by length -> by 
first char -> by last char


As you can observe, "alias" and "align" do collide after all 3 levels are 
passed, thus we go for fixed 2-slot sets at the end to resolve this and a 
few other collisions.
Of course there are better ways to handle the levels that would prevent this 
"collision" and reduce size; this is just to show how a Trie on top of a Set 
works. (Not to mention all keywords are ASCII alphas, hinting at better 
compression in, say, 6 bits.)


And here is how it can work with Maps as slot (the above was mapping to 
booleans in fact):


auto keywordsMap = [
"abstract" : TokenKind.Abstract,
"alias" : TokenKind.Alias,
"align" : TokenKind.Align,
//... all others, cataloged per their kind
"__thread" : TokenKind._Thread,
"__traits" : TokenKind._Traits,
];

//now the usage:
auto keyTrie2 = Trie!(SetAsMap!(SmallMap!(2, TokenKind, string), 
TokenKind, string)
//duplication is because not every Map type Key-Value can be derived (we 
have no standard protocol for AAs)

 , string
 , assumeSize!(6, useLength)
 , useItemAt!(0, char)
 , useLastItem!(char)
)(keywordsMap);

 foreach(k,v; keywordsMap)
assert((k in keyTrie2[k]) == v);

And the only big change is the data type :

struct SmallMap(size_t N, V, K)
{
void insert(Tuple!(V, K) t){ _set.insert(t); }

V opBinaryRight(string op, T)(T key)
if(op == "in")
{
        auto idx = map!"a[1]"(_set.items[]).countUntil(key);
        return idx < 0 ? V.init : _set.items[idx][0];
    }
private:
    SmallSet!(N, Tuple!(V, K)) _set;
}

Re: synchronized (this[.classinfo]) in druntime and phobos

2012-06-04 Thread deadalnix

On 04/06/2012 10:56, Jonathan M Davis wrote:

On Monday, June 04, 2012 10:51:08 mta`chrono wrote:

Am 31.05.2012 17:05, schrieb Regan Heath:

.. but, hang on, can a thread actually lock a and then b?  If 'a' cannot
participate in a synchronized statement (which it can't under this
proposal) then no, there is no way to lock 'a' except by calling a
member.  So, provided 'a' does not have a member which locks 'b' - were
deadlock safe!

So.. problem solved; by preventing external/public lock/unlock on a
synchronized class.  (I think the proposal should enforce this
restriction; synchronized classes cannot define __lock/__unlock).

R


I think it doesn't matter whether you expose your monitor / locking /
unlocking to the public or not. You can always unhappily create
deadlocks that are hard to debug between tons of spaghetti code.


You can always create deadlocks, but if there's something which gives you
little benefit but significantly increases the risk of deadlocks (e.g. making it
easy to lock on a synchronized class' internal mutex via a synchronized
block), then it's valuable to make it illegal. Because while it won't prevent
all deadlocking, it _does_ eliminate one case where it's overly easy to
deadlock.

- Jonathan M Davis


At least illegal by default. The programmer may enable it by 
him/herself, but not fall in the trap inadvertently.


Re: AST Macros?

2012-06-04 Thread deadalnix

On 01/06/2012 21:37, Jacob Carlborg wrote:

On 2012-06-01 17:47, Gor Gyolchanyan wrote:


Where can I read more about Bartosz's race-free type system and if there
are some specific ideas already, AST macros for D as well?


AST macros have been mentioned in the newsgroups several times. There
was a talk at the first D conference mentioning AST macros. This was
before D2.

http://d.puremagic.com/conference2007/speakers.html

It's the talk by Walter Bright and Andrei Alexandrescu. It's probably in
the second part.



I have seen the presentation's slides a long time ago, but didn't knew 
it was recorded on video ! Thank you very much !


Re: synchronized (this[.classinfo]) in druntime and phobos

2012-06-04 Thread Jonathan M Davis
On Monday, June 04, 2012 10:51:08 mta`chrono wrote:
> Am 31.05.2012 17:05, schrieb Regan Heath:
> > .. but, hang on, can a thread actually lock a and then b?  If 'a' cannot
> > participate in a synchronized statement (which it can't under this
> > proposal) then no, there is no way to lock 'a' except by calling a
> > member.  So, provided 'a' does not have a member which locks 'b' - were
> > deadlock safe!
> > 
> > So.. problem solved; by preventing external/public lock/unlock on a
> > synchronized class.  (I think the proposal should enforce this
> > restriction; synchronized classes cannot define __lock/__unlock).
> > 
> > R
> 
> I think it doesn't matter whether you expose your monitor / locking /
> unlocking to the public or not. You can always unhappily create
> deadlocks that are hard to debug between tons of spaghetti code.

You can always create deadlocks, but if there's something which gives you 
little benefit but significantly increases the risk of deadlocks (e.g. making 
it 
easy to lock on a synchronized class' internal mutex via a synchronized 
block), then it's valuable to make it illegal. Because while it won't prevent 
all deadlocking, it _does_ eliminate one case where it's overly easy to 
deadlock.

- Jonathan M Davis


Re: synchronized (this[.classinfo]) in druntime and phobos

2012-06-04 Thread mta`chrono
Am 31.05.2012 17:05, schrieb Regan Heath:
> .. but, hang on, can a thread actually lock a and then b?  If 'a' cannot
> participate in a synchronized statement (which it can't under this
> proposal) then no, there is no way to lock 'a' except by calling a
> member.  So, provided 'a' does not have a member which locks 'b' - were
> deadlock safe!
> 
> So.. problem solved; by preventing external/public lock/unlock on a
> synchronized class.  (I think the proposal should enforce this
> restriction; synchronized classes cannot define __lock/__unlock).
> 
> R
> 

I think it doesn't matter whether you expose your monitor / locking /
unlocking to the public or not. You can always unhappily create
deadlocks that are hard to debug between tons of spaghetti code.

shared A a;
shared B b;

void thread1()
{
 synchronized(a)  // locks A
 {
 synchronized(b)  // ... then B
 {
 // ... code ...
 }
 }
}

void thread2()
{
 synchronized(b) // locks B
 {
 synchronized(a) // ... then A
 {
 // ... code ...
 }
 }
}


Re: AST Macros?

2012-06-04 Thread Don Clugston

On 01/06/12 21:37, Jacob Carlborg wrote:

On 2012-06-01 17:47, Gor Gyolchanyan wrote:


Where can I read more about Bartosz's race-free type system and if there
are some specific ideas already, AST macros for D as well?


AST macros have been mentioned in the newsgroups several times. There
was a talk at the first D conference mentioning AST macros. This was
before D2.

http://d.puremagic.com/conference2007/speakers.html

It's the talk by Walter Bright and Andrei Alexandrescu. It's probably in
the second part.



AST macros were discussed informally on the day after the conference, 
and it quickly became clear that the proposed ones were nowhere near 
powerful enough. Since that time nobody has come up with another 
proposal, as far as I know.