Re: A few notes on choosing between Go and D for a quick project

2015-03-15 Thread Sativa via Digitalmars-d

On Sunday, 15 March 2015 at 14:15:18 UTC, disme wrote:
Do you people really want to see this language be more popular? Do you? It doesn't look like it at all to me. The few people in those 18+ pages who are actually telling you the real problems are mostly being ignored in favor of futile Python, Go, and Rust talk. Seriously?


Let me go ahead and say that no, I don't use D. I found it a while ago and have come back to look at it from time to time, which puts me in a perfect spot to tell you that the fundamental problems for newcomers are right here, in these posts:


page 7 - Leandro Motta Barros' post
page 10 - Almighty Bob's first post
page 11 - Almighty Bob's post (again)
page 14 - rumbu's first post
page 17 - Xavier Bigand's post
page XX - many of Chris' posts
(I may have missed a few, but those are the ones that jumped out at me, where I really went THIS MAN GETS IT!)


Yes, those are fundamental problems FOR A NEWCOMER! 90% of the posts I see in this thread are a bunch of... I don't even know? Advanced problems that newcomers would have no clue about. Only the few posts I mentioned are seeing the real problems.


This community seems to be filled with really intelligent, dedicated people capable of solving some of the hardest challenges, but you fail to see the tiny little things that are stopping the newcomers, and instead you worry about things that are far beyond their scope, people...


I guess I'll find out in a few months, when I visit the language again, whether those posts have been paid attention to. With that said, I wonder how many people will reply to this without having read to the end (a tiny little detail slipping by again?)


Why do you believe that something has to be dumbed down for your sake, instead of stepping up and learning something that will prove to be more powerful in the end?


I don't think D or anything else should kowtow to KISS. Of course the lazy, ignorant masses want this, but humanity ultimately suffers... and there are enough simple programming languages for simple people. Do we really want another Python, Go, Java, Perl, PHP, ASP, JS, Lua, Ruby, Rust... Oh wait, instead of listing hundreds of languages, just look here:


https://en.wikipedia.org/wiki/List_of_programming_languages

Most of those damn languages are essentially just syntactically different. Python vs. Perl vs. PHP? Who cares, it's all the same crap. If all that energy went into creating a better language with more power (power != complexity) instead of just duplicating the stone wheel, then maybe we could get somewhere?


I really find it odd when someone complains about a feature but never uses it. E.g., "I don't like /++/." "How often do you use it?" "Never, I don't like it!"... Then what's the problem? If you don't use it, how is it getting in the way of your progress? "Well, because other people use it and I get confused when I see it in their code!" So, instead of allowing them to program the way they want, you want to control them so you don't have to think as hard?
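
(For reference, /++/ here is D's nestable block comment. A quick sketch of the feature being complained about:)

// /+ +/ comments nest, unlike /* */, so you can comment out a block of
// code that already contains comments:
/+
    int x = 1; /* an inner C-style comment is fine */
    /+ and so is another nested /+ block +/ +/
+/
void main() {}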


Wouldn't you agree: the best language is the one that gets out of the way and lets you do exactly what you need to do in the most efficient way (which isn't just about time)?






Re: A few notes on choosing between Go and D for a quick project

2015-03-15 Thread Sativa via Digitalmars-d

On Friday, 13 March 2015 at 03:24:44 UTC, Walter Bright wrote:

On 3/12/2015 5:20 PM, Andrei Alexandrescu wrote:

* Golang: simple!


D1 was the simple version of D. People wanted more.

Java was originally sold as, and got a great deal of adoption because, it was a C++-like language with all that annoying complexity removed.


There's no doubt about it, people like simple languages. We 
should very much keep that in mind when evaluating proposals 
for new features.


Um, this is wrong. You already have simple languages. People are not going to choose D no matter how much you dumb it down. What sets D apart is its advanced features... remove them, or stop such enhancements, and it won't be able to compete with any other language.



In fact, the opposite thinking should apply. Give D the most advanced, feature-rich set and nothing will be able to compete with it. If, on top of that, you don't force anyone to use those features, then you have the best of both worlds (power when you need it and simplicity when you don't).


There are reasons why people buy luxury cars. D is like a Cadillac and Go is like a Volt. If you turn D into a Volt, then what will the people who like Cadillacs buy?
(Someone will create a new language trying to make a Cadillac, and the whole process starts over...)





D + .NET

2015-03-11 Thread Sativa via Digitalmars-d-learn
If I write a business model in D, how hard is it to hook up a presentation layer using something like WPF, Win32, or even whatever the Mac equivalent is?


Re: D + .NET

2015-03-11 Thread Sativa via Digitalmars-d-learn

On Wednesday, 11 March 2015 at 08:45:15 UTC, Kagamin wrote:

http://wiki.dlang.org/Libraries_and_Frameworks#GUI_Libraries


Can you point out where it says anything about WPF or .NET? I'm having trouble finding it. I even searched for .NET and WPF, but still no luck ;/ Maybe you posted the wrong link by accident?


Re: RCArray is unsafe

2015-03-02 Thread Sativa via Digitalmars-d

On Sunday, 1 March 2015 at 15:44:49 UTC, Marc Schütz wrote:
Walter posted an example implementation of a reference counted 
array [1], that utilizes the features introduced in DIP25 [2]. 
Then, in the threads about reference counted objects, several 
people posted examples [3, 4] that broke the suggested 
optimization of eliding `opAddRef()`/`opRelease()` calls in 
certain situations.


A weakness of the same kind affects DIP25, too. The core of the problem is borrowing (ref return, as in DIP25) combined with manual (albeit hidden) memory management. An example to illustrate:





1  struct T {
2      void doSomething();
3  }
4  struct S {
5      RCArray!T array;
6
7  }
8  void main() {
9      auto s = S(RCArray!T([T()])); // s.array's refcount is now 1
10
11     foo(s, s.array[0]);           // pass by ref
++     s.array[0] = myt;             // would also be invalid (myt being some other T)
12 }
13 void foo(ref S s, ref T t) {
14     s.array = RCArray!T([]);      // drop the old s.array
15     t.doSomething();              // oops, t is gone
16 }



1. Assignment to RC types is not the same as assignment to non-RC types.
2. Allocation of RC types involves more than allocation of non-RC types.


Using the above example:

#9: The assignment and allocation do two things:
   a. Decrement the current RC of s.array (here it is null, so no RC is touched).
   b. Increment the ref count of the allocated RCArray by one.

#11: We pass both s and a reference to something inside s. This is odd behavior, and not technically required in this case (just pass s), but we can expect such behavior to creep up in complex code.


#14: In this case the behavior is correct. The assignment does the following:
   a. Decrements the current RC of s.array (here it was 1, so now it is 0).
   b. Increments the RC of the newly allocated RC array and assigns it to s.array.


#15: Since t refers to the old s.array, which now has a ref count of 0 (and which we can therefore assume to be deallocated), line 15 becomes a run-time error.


---

How to fix?

It seems that the only way to make it work well is to use a sort of lazy ref counting.


Here, references are incremented at the start of functions and decremented at the end of functions... rather than in place (which otherwise causes problems with things that happen after the decrement).
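
A minimal sketch of that idea in today's D, assuming a hand-rolled RC wrapper (RCArray itself was only a proposal): passing by value makes the postblit bump the count at the call boundary and the destructor drop it at function exit, so the payload outlives any reassignment inside the callee.

import core.stdc.stdlib : calloc, free;

struct RC(T)
{
    private T* payload;
    private size_t* count;

    static RC make(T value)
    {
        RC r;
        r.payload = cast(T*) calloc(1, T.sizeof);
        *r.payload = value;
        r.count = cast(size_t*) calloc(1, size_t.sizeof);
        *r.count = 1;
        return r;
    }

    this(this) { if (count) ++*count; }   // the "increment at function start"

    ~this()                               // the "decrement at function end"
    {
        if (count && --*count == 0) { free(payload); free(count); }
    }
}

void foo(RC!int r)  // by value: r pins the payload for the whole call
{
    // even if the caller drops its own reference now, *r.payload stays alive
}

void main()
{
    auto s = RC!int.make(42);
    foo(s);  // count goes 1 -> 2 on entry, back to 1 on exit
}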


But this can't work in general, as the ++ line I added shows: it would solve your example but not a generalization of it.


---

Is there a solution? Essentially, no. Only at the end of the program could the last function release all RC-counted arrays, since only then do we know that no more RC types will be used.


The problem is analogous to the multiple-inheritance problem.

---

Flow analysis can only help so far (it won't help with dynamically allocated types that are allocated depending on some run-time factor).


---

Note that since t points into s, technically we can't release s until t is released.


So, effectively, s has to be ref counted too! And its ref count must check t's ref count: if that is non-zero, then s can't release itself (or the reference to its array).


Also, something eventually has to release s (and with it s.array). When? Well, since t depends on s, it is when t is released. So we would naturally have to inform t that it should also release s when it itself is released.




So, if that sort of makes sense, I'll explain the solution. It might be a bit fuzzy, but I'll try to make things very clear. [I will also sometimes conflate concrete objects with their types, e.g., T with t. It should be clear from context which is meant; e.g., if I say "T is released", I mean that the object of type T was released. Where they need to be distinguished, I will do so (mainly in the code examples).]



A *reference counted* type, from here on designated an RCT, is a type that only allows itself to be released to the heap when nothing depends on it. [Obviously, anything that depends on it after it is released will fail to be fulfilled.]


If an RCT does not contain any references to other RCTs, we will call it a free RCT, or FRCT. Such types behave in a simple way: they can be freed completely and immediately. Since FRCTs do not contain any references to other RCTs, all their references are simple references that can be freed immediately.


We require that if a type contains a reference to an RCT, then it too is an RCT. [The relation: ContainsRCT(T1, T2) == ContainsRCT(T2, T1) == both T1 and T2 are RCTs, or neither is.] This isn't a big deal, but without it we would end up requiring all types to be RCTs; this way, some types are allowed not to be RCTs.



In code,

class T { }    // a standard non-reference-counted type. It can't hold any
               // RCTs as references. It is your basic D class.

rcclass FRCT   // rcclass means reference counted class

Re: Contradictory justification for status quo

2015-02-28 Thread Sativa via Digitalmars-d
On Friday, 27 February 2015 at 02:58:31 UTC, Andrei Alexandrescu 
wrote:

On 2/26/15 6:17 PM, H. S. Teoh via Digitalmars-d wrote:
On Thu, Feb 26, 2015 at 05:57:53PM -0800, Andrei Alexandrescu 
via Digitalmars-d wrote:

On 2/26/15 5:48 PM, Zach the Mystic wrote:
I sometimes feel so bad for Kenji, who has come up with several reasonable solutions for longstanding problems, *and* implemented them, only to have them be frozen for *years* by indecision at the top.


Yah, we need to be quicker with making decisions, even negative ones. This requires collaboration from both sides - people shouldn't get furious if their proposal is rejected. Kenji has been incredibly gracious about this.

[...]

I don't think people would be furious if they knew from the beginning that something would be rejected. At least, most reasonable people won't, and I'm assuming that the set of unreasonable people who contribute major features is rather small (i.e., near cardinality 0).


Well, yes, in theory there's no difference between theory and practice, etc. What has happened historically (fortunately not as much lately) was that statistically most proposals have been simply Not Good. Statistically, proposal authors have been Positively Convinced that their proposals were Obviously Excellent. (That includes me; statistically, most ideas I've ever had have been utter crap, but they seldom seemed like it in the beginning.) This cycle has happened numerous times. We've handled it poorly in the past, and we're working on handling it better.


What *does* make people furious / disillusioned is when they are led to believe that their work will be accepted, and then after they put in all the effort to implement it, make it mergeable, keep it up to date with the moving target of git HEAD, etc., it gets summarily dismissed. Or ignored for months and years, and then suddenly shot down. Or worse, it gets *merged*, only to be reverted later because the people who didn't bother giving feedback earlier now show up and decide that they don't like the idea after all. (It's a different story if post-merge rejection happens because it failed in practice -- I think reasonable people would accept that. But post-merge rejection because of earlier indecision / silence kills morale really quickly. Don't expect to attract major contributors if morale is low.)


Yes, going back on a decision or promise is a failure of leadership. For example, what happened with [$] was regrettable. We will do our best to avoid such things in the future.


I should add, however, that effort in and by itself does not 
warrant approval per se. Labor is a prerequisite of any good 
accomplishment, but is not all that's needed.


I'm following with interest the discussion "My Reference Safety System (DIP???)". Right now it looks like a lot of work - a long opener, subsequent refinements, good discussion. It also seems just that - there's work, but there's no edge to it yet; right now a DIP along those ideas is more likely to be rejected than approved. But I certainly hope something good will come out of it. What I hope will NOT happen is that people come to me with a mediocre proposal going, "We've put a lot of Work into this. Well?"



Andrei


I'm curious whether project management software (e.g., MS Project) is used to optimize and clarify goals for the D language.


If such a project file were maintained, anyone could download it and see the current state of D.


The main use would be optimizing tasks and displaying the timeline. If something has been sitting around for a year and is blocking other tasks, you could easily see that.


It obviously would be a lot of work to set up such a project. I imagine you could write a script to import data from GitHub or wherever into the project, and possibly vice versa.






Re: @inverse

2015-02-25 Thread Sativa via Digitalmars-d

On Thursday, 26 February 2015 at 00:56:02 UTC, Xinok wrote:

On Wednesday, 25 February 2015 at 21:25:49 UTC, Daniel N wrote:
Just throwing an idea out there... How about using annotations to teach the compiler which functions are inverses of each other, in order to facilitate optimizing away certain redundant operations, even if they are located inside a library (i.e., no source)?


A little pseudo-code for illustrative purposes, in case my above text is incomprehensible:


void inc() pure nothrow @inverse(dec)
void dec() pure nothrow @inverse(inc)

void swap(T)(ref T lhs, ref T rhs) pure nothrow @inverse(swap!T)


I like the idea, but I feel that its application is too narrow. I prefer features which are more general and offer greater flexibility. I believe I've read somewhere that some [functional] languages define common patterns and equivalent substitutions for optimization purposes:


inc(dec(x)) -> x
dec(inc(x)) -> x
cos(x)^^2 + sin(x)^^2 -> 1


Which would be exactly the result of what Daniel is talking about, except you are adding algebraic identities, which is a harder problem, yet one that can be taken care of by the programmer (you would never intentionally write cos(x)^^2 + sin(x)^^2 for anything, since it is equal to 1 and 1 is more efficient to compute).
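
To make the shape of the proposal concrete, here is a minimal sketch of how the annotation could be spelled with today's UDAs. Nothing in the compiler reads them; the inverse struct below is my own assumption, and a hypothetical optimizer pass would have to consume it:

struct inverse(alias f) {}   // sketch: a plain UDA carrying the inverse

int inc(int x) pure nothrow @(inverse!dec) { return x + 1; }
int dec(int x) pure nothrow @(inverse!inc) { return x - 1; }

void main()
{
    assert(inc(dec(5)) == 5); // the rewrite the annotation would license: inc(dec(x)) -> x
}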


The problem is one of composition and it is difficult in real 
circumstances since compositions may not be simply ordered.


e.g., what if you have

inc(foo(dec(x)))

? In this case one can't simplify, because one doesn't know what foo does.


Hence, to do it properly, one would have to create a whole compositional system, e.g., @linear, @nonlinear, @additive, @commutative, etc.


e.g., if we knew foo was linear, then we could simplify the above to foo(x).


...and, as you hinted at, most functions are non-linear, which will make @inverse nearly useless.



I suppose, though, one might be able to do something like set up @inverse functions for actions.

e.g., the user clicks on button X; the inverse is then a sort of undo of that.

In an undo system one expects every action to be invertible (to have an inverse)... hence @inverse might be useful in such circumstances.


Re: D GC theory

2015-02-24 Thread Sativa via Digitalmars-d

On Tuesday, 24 February 2015 at 08:39:02 UTC, Kagamin wrote:

On Tuesday, 24 February 2015 at 00:30:43 UTC, ketmar wrote:

On Mon, 23 Feb 2015 21:11:22 +0000, Sativa wrote:

How hard would it be to modify D's GC to do the following two 
things:


1. Scan the heap in the BG on another thread/cpu for 
compactification.


needs read/write barriers added to generated code. a major slowdown for ALL memory access.


Only modifications of pointers which introduce new cross-block dependencies (so that the GC knows to recheck the new dependency). Other memory access goes without slowdown.


But this type of thinking is the reason why the current GC is in the state it is in.


The compiler knows which pointers are free and which ones are bound. Bound pointers are pointers that are not assigned arbitrarily by the user; e.g., a pointer to an array whose address is never arbitrarily set by the user is bound. The compiler knows where and how such a pointer is assigned. Most pointers are this way.


Bound pointers are pointers the GC can easily clean up, because it knows when and how they are used. If all pointers in a program were bound, the GC could work in the background and never pause the program to clean up (essentially, the compiler would need to insert special code). Most pointers are bound pointers.


Free pointers are more difficult: they can, say, be initialized arbitrarily, point anywhere on the heap, and have to be inspected in a locked way (to prevent them changing in the middle of some GC operation).


But if one distinguishes bound and free pointers (easily done with a bit in the pointer) and has the compiler keep track of when free pointers are used (by setting a dirty bit when they are written to), then one can more easily scan the heap in the background.


In fact, one could potentially get away from all synchronization issues by doing the following:


Whenever free pointers are used, a simple spin lock is taken. The spin lock checks a flag in the free pointer table that signals that a pointer is being changed by the code. While this flag is true, the free pointer table is in a state of flux and can't be relied on. In the meantime, the GC can build up information about the heap for the bound pointers. It can figure out what needs to be changed, set up buffering (which can be done using bits in the pointer), etc., all in the background, because the bound pointers are stable and change deterministically.


When the free pointer table's dirty flag is unset, it means the free pointers are not being changed by the program, and the GC can lock the table using another flag. While that lock flag is set, the spin lock kicks in and pauses the program while the GC works on the free pointer table. (Or, to be more efficient, the program can yield to some other background task.)


By having multiple tables of free pointers, one can reduce the overhead: the GC looks at one piece at a time and locks only a fraction of the program at any point in time. The compiler can distribute the locks vs. pages in an optimized way through profiling.
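
A very rough sketch of that handshake with a single table (all names are made up; this is not how druntime's GC works, and a real implementation would need a stronger handshake, since the check-then-lock below still races):

import core.atomic : atomicLoad, atomicStore;

shared bool tableDirty;          // mutator is mid-update
shared bool tableLocked;         // collector owns the table
__gshared void*[256] freeSlots;  // the "free pointer table"

// mutator side: spin while the GC owns the table, flag the update
void mutatorUpdate(size_t i, void* p)
{
    while (atomicLoad(tableLocked)) { }
    atomicStore(tableDirty, true);
    freeSlots[i] = p;
    atomicStore(tableDirty, false);
}

// collector side: wait for a quiet table, then take it
void collectorScan()
{
    while (atomicLoad(tableDirty)) { }
    atomicStore(tableLocked, true);
    scope (exit) atomicStore(tableLocked, false);
    foreach (p; freeSlots)
    {
        // mark/relocate through p here
    }
}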








D GC theory

2015-02-23 Thread Sativa via Digitalmars-d
How hard would it be to modify D's GC to do the following two 
things:


1. Scan the heap in the BG on another thread/cpu for 
compactification.


2. Move blocks of memory in predictable(timewise) chunks that 
prevents locking the main thread for any period of time.


e.g., in the first step the GC finds some blocks of memory that need to be freed/compacted. In the second step it starts freeing/compacting them in predictable pieces by limiting the time it works.


The point is that maybe the GC is run more often, but in smaller and more predictable steps.


That is, the GC should be able to calculate how long it will take to free/compact a block. If it would take too long, then it simply does it in stages.


This way, there is essentially a very predictable and consistent CPU usage with the GC running, but never any major lag spikes that throw real-time behavior out the window.


It would seem that such a feature would be easy to implement by modifying the existing GC code to be incremental.
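
In pseudo-D, the kind of scheduling I mean, where collectSome is a hypothetical hook for one bounded unit of GC work (the real GC exposes no such API):

import std.datetime : StopWatch;

// hypothetical: do one small, bounded unit of marking/sweeping;
// returns false when no work is pending
bool collectSome() { return false; }

// run the GC in slices capped at budgetUsecs, resuming where it left off
void gcSlice(long budgetUsecs = 1000)
{
    StopWatch sw;
    sw.start();
    while (sw.peek().usecs < budgetUsecs)
        if (!collectSome()) break;
}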


I'd prefer a constant 1-5% CPU usage given to the GC if it then didn't blow up for no reason. That way, the GC being very predictable, one just has to get a slightly faster CPU to compensate, or optimize the code slightly.


It would be analogous to game programming.

1. We can have the GC steal, say, 1 fps to do its work...

2. Or we can keep the GC asleep, doing nothing, until it gets so much work that it has to pause the entire engine for half a second, dropping the fps by half momentarily. This might happen only every minute or so, but it is still unacceptable for most gamers (assuming 30-60 fps).



I'd prefer the first case.






Re: D GC theory

2015-02-23 Thread Sativa via Digitalmars-d
Basically, I am simply wondering if the GC can throttle itself so as to reduce the *maximum* time the program has to wait.




Re: D GC theory

2015-02-23 Thread Sativa via Digitalmars-d

On Monday, 23 February 2015 at 22:11:48 UTC, weaselcat wrote:

On Monday, 23 February 2015 at 21:11:23 UTC, Sativa wrote:
How hard would it be to modify D's GC to do the following two 
things:


1. Scan the heap in the BG on another thread/cpu for 
compactification.


2. Move blocks of memory in predictable(timewise) chunks that 
prevents locking the main thread for any period of time.


e.g., in the first step the GC finds some blocks of memory that need to be freed/compacted. In the second step it starts freeing/compacting them in predictable pieces by limiting the time it works.


The point is that maybe the GC is run more often, but in smaller and more predictable steps.


That is, the GC should be able to calculate how long it will take to free/compact a block. If it would take too long, then it simply does it in stages.


This way, there is essentially a very predictable and consistent CPU usage with the GC running, but never any major lag spikes that throw real-time behavior out the window.


It would seem that such a feature would be easy to implement by modifying the existing GC code to be incremental.


I'd prefer a constant 1-5% CPU usage given to the GC if it then didn't blow up for no reason. That way, the GC being very predictable, one just has to get a slightly faster CPU to compensate, or optimize the code slightly.


Hi,
D's GC actually is predictable. It will not collect unless you allocate. You can also disable it and manually collect. I use it this way, essentially as a smart freelist.



That isn't the problem. The problem is that when it does collect, the time it takes is unpredictable. It could take 1us or 10m. If there were a cap on how long the GC can run in any particular time interval, then its time complexity would be simple and predictable, and the GC could be better used for RT applications.


Effectively, I would rather the GC ran every second and spent a maximum of 1ms cleaning up (not necessarily finishing) instead of running whenever and potentially taking seconds to clean up.


It's all about how the GC's running time is distributed. As it stands now, it can run at any time and take as long as it wants. I'd rather have it run continuously but not take as long as it wants. By letting it run continuously, in short bursts, one should get the same long-term behavior but without the spikes in CPU usage that prevent its usefulness in RT applications.
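
For completeness, the pattern weaselcat describes is possible today with core.memory. A sketch (frame here is a stand-in for real per-frame work):

import core.memory : GC;

void frame() { /* allocate freely here */ }

void main()
{
    GC.disable();        // no automatic collection mid-frame
    foreach (i; 0 .. 60)
    {
        frame();
        GC.collect();    // collect at a point we choose instead
    }
    GC.enable();
}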






Re: Proposal : aggregated dlang git repository

2015-02-13 Thread Sativa via Digitalmars-d
On Tuesday, 10 February 2015 at 06:22:51 UTC, Andrei Alexandrescu 
wrote:

Why? Why are so many of us dedicating so much energy to tweaking what already works, instead of tackling real problems? Problems that, e.g. - pardon my being pedantic - are in the vision document?




Because this is what people do! They avoid the bigger issues, which are hard, grueling, mind-numbing, and without short-term achievements.


The good news is that with proper management it all could be done:

1. Proper D IDE with great debugging, library, and resource 
management.

2. Fixing the GC/phobos mess
3. Cross-platform Gui and 3d graphics library
4. etc...
5. etc..
etc...

The great news is that there are people working, and willing to work, on all these things. Also, most of them already have solutions (not necessarily the best ones, but people at least know what needs to be done).


The problem?


Someone has to manage all this and see how all the pieces of the puzzle fit together, knowing how each interacts with the others, and guide those working on the individual parts so they can achieve their goals, which fill in large parts of the puzzle.



Basically, someone needs to step up and become the tsar/overseer: someone who spends their time managing everything and letting the worker bees collect honey.


As it seems to me, most worker bees are a bit confused about what they should/could be doing, because there is no clear and precise roadmap of what exactly needs to be done to reach the goal (though many maps have been made).


(It's not as bleak as I've implied; I'm just saying there is significant room for improvement in efficiency, which seems to be the main problem from my perspective.)





Re: Interfacing D to existing C++ code

2015-02-01 Thread Sativa via Digitalmars-d-announce

On Friday, 23 January 2015 at 11:04:12 UTC, Walter Bright wrote:


Mandatory reddit link: 
http://www.reddit.com/r/programming/comments/2tdy5z/interfacing_d_to_legacy_c_code_by_walter_bright/



There's been a lot of interest in this topic.


Interesting...

I wonder if two things could happen:

1. Could a tool be written to generate the interfacing code in D from the C++ code?


2. In your STL example, could the default argument be automatically inferred from the mangling?


What I mean is: do we really need to know the default arguments, or do we just have to state them explicitly to make the name mangling work?


If it is the latter, then couldn't the D compiler have a sort of wildcard default parameter, where the compiler allows any such argument to work?


i.e., unless we actually need the default argument explicitly in some way, it seems that we could just derive its mangled form from the C++ object file and use that directly in the D-mangled version.


Essentially, the D compiler mangles what it can, then substitutes the C++ mangling for the unknown parts into the D mangled name.


I just think that if we are only linking, why would one have to implement anything? Just copy the mangling from the C++ object file.
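
For reference, this is roughly what the hand-written (or generated) side looks like today with extern(C++); cppAdd is a made-up C++ function, and currently you do have to spell out its declaration yourself:

// D side: no body needed; extern(C++) makes the compiler emit the C++
// mangling, so the symbol resolves against the C++ object file at link time.
extern (C++) int cppAdd(int a, int b);

void main()
{
    auto three = cppAdd(1, 2); // calls straight into the C++ implementation
}

/* C++ side, compiled separately:
   int cppAdd(int a, int b) { return a + b; }
*/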


3. Hopefully a mapping table is used, instead of having to actually have an implementation for every compiler?


e.g., essentially implement a standard ABI in D, and map each C++ compiler's mangling scheme to that, instead of implementing each C++ compiler's mangling separately.


It seems that 99% of the problem is translational in nature, and that one could auto-generate D interfaces from C++ mangled names. (In fact, one could go a step further and simply reverse C++ object code into D code.)






Re: Overload using nogc

2014-11-23 Thread Sativa via Digitalmars-d
On Saturday, 22 November 2014 at 16:56:58 UTC, Ary Borenszweig 
wrote:

On 11/21/14, 12:36 AM, Jonathan Marler wrote:

Has the idea of function overloading via nogc been explored?

void func() @nogc
{
// logic that does not use GC
}
void func()
{
// logic that uses GC
}
void main(string[] args) // @nogc
{
// if main is @nogc, then the @nogc version of func
// will be called, otherwise, the GC version will be
func();
}

This could be useful for the standard library to expose 
different
implementations based on whether or not the application is 
using the GC.


If you have a version that doesn't use the GC, what's the 
reason to prefer one that uses it?


Because, say, you can have two versions! Is that not good enough for you?


e.g., you can later on write a @nogc version and simply switch between the two implementations by toggling the attribute on main.


Benefits? Easy:

1. It allows you to update your library to handle @nogc progressively.
2. It allows you to debug your code more easily: if it works with the GC version and not the @nogc one, then obviously the bug is in the @nogc code.
3. It reduces code bloat and confusion, because one doesn't have to have multiple names for the same function floating around, e.g., allocate_gc, allocate_nogc_heap, etc.


The problem here is that such a rule chooses one or the other, which reduces its usefulness. If one could instead explicitly request @nogc overloads, say, then


void main() @request(@nogc(heap))
{
    allocate();
    other_overload();
}

could attempt to use the @nogc overload of everything it calls; in this case, the above would expand to essentially


void main() @request(@nogc(heap))
{
    allocate_nogc_heap();
    other_overload_gc();  // assuming it exists; if not, whatever overloads do exist are used
}

But with all this added complexity, I don't think one gets a huge benefit in this case. By simply making the compiler GC-agnostic and using allocators, I think one gets basically the same functionality with more clarity.


It may work in other cases, though. Essentially, the concept is to extend overloading to work with attributes.


e.g.,

void func(...)
void func(...) @attr1
void func(...) @attr2

are all overloads, with the attributes also taking part in overload resolution.

func@random(...)

calls a random overload (chosen at compile time).

func@request(@attr1)(...)

attempts to call the @attr1 overload but falls back on the unattributed ones, etc.

func@attr1(...)

calls the @attr1 overload, or fails if it doesn't exist.


Ultimately, having such a metadata system helps partition similar 
functions in a program and then do something more complex.


Not sure if it would actually be that useful, since we can effectively already do this type of stuff, albeit with more work and confusion.
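
For instance, a rough sketch of the "already possible, with more work" route, using a version switch instead of attribute overloading (NoGCBuild is a made-up identifier):

version = NoGCBuild;  // toggle this to switch implementations project-wide

version (NoGCBuild)
{
    void func() @nogc { /* logic that avoids the GC */ }
}
else
{
    void func() { /* logic that may allocate */ }
}

void main() @nogc  // drop @nogc here as well if NoGCBuild is unset
{
    func();  // resolves to whichever version was compiled in
}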








Efficient file search and extract

2014-11-04 Thread Sativa via Digitalmars-d-learn
Is there a very easy way to search a file for a string, then extract everything from that match to the end of the file into a new file?


I basically want to remove a header from a file (its length is not fixed, though).


It seems I'm having to convert bytes to chars to strings and back 
and all that mess, which is not very elegant.


Re: DIP66 v1.1 (Multiple) alias this.

2014-11-02 Thread Sativa via Digitalmars-d

class A : B
{
    B b;
    alias b this; // Error: super type (B) always hides aliased-this type B,
                  // because base classes are processed before alias this types.
}



Why is this an error? Just because A inherits B doesn't mean that the alias is wrong.


e.g., if you have

class A1 { B b; alias b this; }

and

class A2 : B { B b; alias b this; }

then A2 is just A1 in all forms. What's great about it is that it allows easy composition patterns (that is, A1 has essentially implemented B through the alias).


e.g., we can reassign b to get different behavior without having to write any boilerplate code. It allows one to decompose classes and their implementations in a natural way.


Of course, it would require the alias this to be processed before 
the base class.


I think it would be worth letting the alias override the inheritance, because it makes things easier:



class A : BBB { BB b; alias b this; }

If BBB and BB are related in some complex way, the compiler has to deduce this. e.g.,


class BBB : Q!B { }

where Q!B is some template that is a superclass of B.

But if the alias is processed first, it won't matter.


* Regarding the lookup, opDispatch shouldn't come before "alias this", or should come before base class lookup. Essentially "alias this" is subtyping, so it should enjoy similar privileges to base classes. A different way to look at it is that opDispatch is a last-resort lookup mechanism, just one step above the UFCS lowering.

I agree with this suggestion; however, it breaks existing code.
opDispatch shouldn't come before base type lookup, because it would hide basic methods like toString.
opDispatch may come after alias this lookup; however, that would fundamentally change program behaviour.


Why can't you simply have opDispatch call the base class lookup if all else fails?


It seems to me that one should have

alias this -> opDispatch -> class -> base class(es)

but with each one having a fall-through mechanism. e.g., if someone handles toString in opDispatch, it calls their function; if not, the call gets passed on to the base class toString.
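
(For contrast, in today's D opDispatch is already a last resort relative to real members; a quick sketch:)

import std.stdio : writeln;

class Base
{
    override string toString() { return "Base"; }
}

class C : Base
{
    // opDispatch is only consulted after normal member lookup fails,
    // so it cannot hide toString or anything else C inherits
    string opDispatch(string name)()
    {
        return "dispatched: " ~ name;
    }
}

void main()
{
    auto c = new C;
    writeln(c.toString()); // "Base" - resolved before opDispatch
    writeln(c.greet());    // "dispatched: greet" - falls through to opDispatch
}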


Why should it work this way? Because alias this and opDispatch are user-defined. The user knows what they are doing, and the compiler shouldn't get in the way by preventing them from doing things they want to do.


The compiler essentially fixes up all the missing connections that it can, but never forces connections the user may not want.



Basically all one needs is something like

bool opAliasDispatch(...)
{
   if (...) { ... return true; } // tries to dispatch
   return false;
}


bool opDispatch(...)
{
   if (...) { ... return true; } // tries to dispatch
   return false;
}

bool opClassDispatch(...)
{
   if (...) { ... return true; } // tries to dispatch
   return false;
}

bool opBaseDispatch(...)
{
   if (...) { ... return true; } // tries to dispatch
   return false;
}

Then a master dispatcher is

bool opMasterDispatch(...)
{
   return opAliasDispatch(...) || opDispatch(...) ||
          opClassDispatch(...) || opBaseDispatch(...);
}



This makes it easier to add new dispatchers in the future, etc.

Also, if you are worried about backwards compatibility, just create a command-line switch to select one method over the other. Easy, and it doesn't force everyone to be stuck with a suboptimal solution just for backwards compatibility (which is the scourge of humanity... we have to move forward, not be stuck in the past!!!).




Re: std.parallelism curious results

2014-10-05 Thread Sativa via Digitalmars-d-learn
Two problems. One, you should create your threads outside the stopwatch; anything else is not a fair comparison in the real world, and it throws off the results for short tasks.


Second, you are creating one thread per integer; this is bad. Do you really want to create 1B threads when you probably only have 4 cores?


Below, 4 threads are used. Each thread adds up 1/4 of the integers, so it is like 4 threads each adding up 250M integers. The speed, compared to a single thread adding up 250M integers, shows how much the parallelism costs per thread.


import std.stdio, std.parallelism, std.datetime, std.range, core.atomic;

void main()
{
    StopWatch sw;
    shared ulong sum1 = 0;
    ulong sum2 = 0, time1, time2;

    auto numThreads = 4;
    ulong iter = numThreads * 10UL;

    auto thds = parallel(iota(0, iter, iter / numThreads));

    sw.start();
    foreach (i; thds)
    {
        ulong s = 0;
        for (ulong k = 0; k < iter / numThreads; k++) { s += k; }
        s += i * iter / numThreads; // offset for this thread's chunk
        atomicOp!"+="(sum1, s);
    }
    sw.stop(); time1 = sw.peek().usecs;

    sw.reset(); sw.start();
    for (ulong i = 0; i < iter; ++i) { sum2 += i; }
    sw.stop(); time2 = sw.peek().usecs;

    writefln("parallel sum : %s, elapsed %s us", sum1, time1);
    writefln("single thread sum : %s, elapsed %s us", sum2, time2);
    writefln("Efficiency : %s%%", 100 * time2 / time1);
}

http://dpaste.dzfl.pl/bfda7bb2e2b7

Some results:

parallel sum : 780, elapsed 3356 us
single thread sum : 780, elapsed 1984 us
Efficiency : 59%


(Not sure all the code is correct; the point is that you were creating 1B threads with 1B atomic operations, the worst possible comparison one can make between single- and multi-threaded tests.)





Re: std.parallelism curious results

2014-10-05 Thread Sativa via Digitalmars-d-learn

On Sunday, 5 October 2014 at 21:25:39 UTC, Ali Çehreli wrote:
import std.stdio, std.cstream, std.parallelism, std.datetime, std.range, core.atomic;

void main()
{
    StopWatch sw;
    shared ulong sum1 = 0;
    ulong sum2 = 0, time1, time2;

    enum numThreads = 4; // if numThreads is a run-time variable, it significantly slows down the process

    ulong iter = 100L;
    iter = numThreads * cast(ulong)(iter / numThreads); // force iter to be a multiple of numThreads so we can partition uniformly

    auto thds = parallel(iota(0, cast(uint) iter, cast(uint)(iter / numThreads)));

    sw.reset(); sw.start();
    foreach (i; thds)
    {
        ulong s = 0;
        for (ulong k = 0; k < iter / numThreads; k++) { s += k; }
        s += i * iter / numThreads;
        atomicOp!"+="(sum1, s);
    }
    sw.stop(); time1 = sw.peek().usecs;

    sw.reset(); sw.start();
    for (ulong i = 0; i < iter; ++i) { sum2 += i; }
    sw.stop(); time2 = sw.peek().usecs;

    writefln("parallel sum : %s, elapsed %s us", sum1, time1);
    writefln("single thread sum : %s, elapsed %s us", sum2, time2);
    if (time1 > 0) writefln("Efficiency : %s%%", 100 * time2 / time1);
    din.getc();
}

Playing around with the code above, it seems that when numThreads is an enum, the execution time is significantly affected (efficiency goes from under 100% to over 100%).


Results on a 4-core laptop with release builds, with numThreads an enum:

parallel sum : 4950, elapsed 2469 us
single thread sum : 4950, elapsed 8054 us
Efficiency : 326%

When numThreads is an int:

parallel sum : 4950, elapsed 21762 us
single thread sum : 4950, elapsed 8033 us
Efficiency : 36%


Re: std.parallelism curious results

2014-10-05 Thread Sativa via Digitalmars-d-learn

On Sunday, 5 October 2014 at 21:53:23 UTC, Ali Çehreli wrote:

On 10/05/2014 02:40 PM, Sativa wrote:

foreach(i; thds) { ulong s = 0; for(ulong k = 0; k < iter/numThreads; k++)

The for loop condition is executed at every iteration, and division is an expensive operation. Apparently, the compiler does some optimization when the divisor is known at compile time.


Being 4, it is just a shift by 2 bits. Try something like 5; it is slow even for an enum.


This solves the problem:

const end = iter/numThreads;

for(ulong k = 0; k < end; k++) {

Ali


Yes, it is a common problem when a computation on the bounds is done inside a for loop. Most of the time the bounds are constant for the loop, but the compiler computes them every iteration. When the loop does a simple sum (i.e., not much work per iteration), this becomes expensive, since the recomputation is comparable to what is happening inside the loop.


It's surprising just how slow it makes things, though. One can't really make numThreads const in the real world, though, as that wouldn't be optimal (unless one had a version for each possible number of threads).


Obviously one can just move the computation outside the loop. I 
would expect better results if the loops actually did some real 
work.





Re: Mono corrupted D files

2014-09-03 Thread Sativa via Digitalmars-d

On Wednesday, 3 September 2014 at 21:13:31 UTC, AsmMan wrote:
Something very strange happened 2/3 days ago. Two of my D files 
of the project I was working on got all values replaced by 0 
(that's what I seen rather D code if I open the file with a hex 
debugger). The file size of both files keep intact although. 
And no, I have no backup of these files. I had a old copy of it 
on a external hard drive but I needed to format it to use in 
something else and didn't put my files before it...


Instead of turn off my windows machine I always hirbenate it 
and left open all stuff and then I just back quickly to point 
where I was on. That day, when I logged on system I noticied 
first non-usual behavior: the machine looked like I had 
restarted it instead of hibernate. All stuff I left open 
(including mono) wasn't open anymore. I find it strage but 
moved on. But to my surprise when I open mono, the recent 
projects always available on left menu bar was empty. Just 
like I had installed mono not used yet. I open my project 
directly by clicking on open and navigating to folder of 
projec and then I see the two of main project files with a 
values set to zero.


Can some Mono expected help me?
My question is: can I recovery these files? or what remains to 
me is cry?
restore the system didn't helped (and I neither expected to but 
I tried)


Not sure if it is related: that day my machine had no a network 
connection.


You have probably already lost the data, but it is possible that a different copy of the file is still located on the drive. If you've restored a backup, though, you are probably screwed.


Sometimes programs will store files in a temp directory, or, when saving a file, will not overwrite the old one but delete it and create a new one.


These can be recovered in many cases, as long as the data hasn't been overwritten by other files. They may no longer exist in the file system, but in its free space; you have to use an undelete program that does a low-level scan of the disk for deleted files.







One Stop Shop?

2014-08-30 Thread Sativa via Digitalmars-d
I think it would be helpful for the dlang site to host tutorials/lessons on various aspects of D. D is hard to use for certain things like GUIs and graphics (OGL, DX, etc.), not necessarily because D can't do these things, but because the information is not out there.


If the D site hosted, sort of like a wiki but with properly designed articles, explanations of how to do things in D, maybe more people would start to use it?


For example,

I'd like to get into graphics and GUI programming using D. I could use something like C#/.NET, which would make life easier, but I don't like the drawbacks of C#.


But trying to find coherent information on, say, drawing an image on screen using OpenGL and D is nearly impossible... and way too time-consuming, compared to finding similar information for C/C++ or most other popular languages.


Getting D to work in these scenarios is already complicated enough. Usually it relies on the work of one person who hasn't made it clear how to do it well. I know D can do graphics; I've seen it done... I've even read some tutorials on it... but nothing is clear, relevant, or up to date.


It would help to have a quick way to access material in categories like:

Sound (playing sounds, VST design)

Graphics (GUI, 2D/3D using OpenGL, etc...)

etc...


e.g., suppose I want to create a VST plugin in D. Google "D lang vst", and the only relevant site that comes up is:

http://le-son666.com/software/vstd/

Woo hoo! Looks like someone did the work for us!

But seriously? There is just enough information there to hang myself. Do I really want to go down this path and potentially waste countless hours trying to get something to work that might not?


I feel many others go through the same thought process.

If there were a wiki-like site for D based on tutorials and experiences/results, it would surely help a lot of people. If people could contribute their problems and solutions in a unified and connected way, it would be easier to find the relevant information than it is now.


About 95% of the time when I search for something that I want to do in D, I get a forum post... and about 15% of the time it actually leads to something useful. Maybe about 5% of the time it actually solves my problem.


There is just so much junk out there and all the gems are lost. 
Most of the gems need some polishing to show their true beauty.