Re: How about a special null template parameter?

2016-08-20 Thread poliklosio via Digitalmars-d

On Friday, 19 August 2016 at 18:25:06 UTC, Engine Machine wrote:

So we can create types relationships easier:

class Type(T) : Type!null
{
   int x;
   static if (is(T == Dog))
      int y;
}

Type == Type!null (!null implicit, only if defined in the above 
way. Essentially an alias is created for us automatically)


This syntax would be very confusing to a non-expert. It is a 
special case of existing features (inheritance and templates), 
which makes it hard to learn about, as all resources are going to 
discuss those other features first, and only the most detailed 
readings are going to contain this detail. Moreover, it does not 
introduce a keyword or any other name, so it is almost impossible 
to Google. Try googling something like "template class a 
class b: public a", and see which result contains "curiously 
recurring template pattern". It is hard to find, isn't it?
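
For reference, the D spelling of that pattern (a toy example of 
mine, not anything from the proposal above) is:

class Base(T)
{
    // the base template knows the concrete derived type at compile time
    void greet()
    {
        import std.stdio : writeln;
        writeln("I am a ", T.stringof);
    }
}

class Dog : Base!Dog
{
    int y;
}

void main()
{
    (new Dog).greet(); // prints "I am a Dog"
}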


Moreover, as indicated by another poster, null is already a valid 
template parameter, making it even more confusing. Also, it is 
unclear when the interpretation would be the one you propose.


You need to appreciate the difference between write-only code and 
code that can be easily read, reviewed and understood.


Dlang has already gone too far in inventing pieces of non-obvious 
syntax for template features. Let's not make the situation worse.


Distill what you want to do, see what use cases are covered by 
other features and libraries, name your thing accordingly and 
then propose.


Re: DIP1000: Scoped Pointers

2016-08-11 Thread poliklosio via Digitalmars-d-announce

On Wednesday, 10 August 2016 at 20:35:23 UTC, Dicebot wrote:

The first DIP has just landed into the new queue. It is a (...)


The wording is unclear in the section "Implicit Conversion of 
Function Pointers and Delegates".


It says "scope can be added to parameters, but not removed."

The trouble is that the word "added" can be understood in two 
opposite ways.


When you assign an address to a pointer variable, you can say 
that:


- you are assigning a new runtime value, so you are adding the 
scope of the right-hand side's parameter to the otherwise 
unscoped parameter of the left-hand side, e.g.


alias T function(T) fp_t;
T bar(scope T);
fp_t   fp = &bar; // Ok

- you are converting to a new variable, so you are adding the 
scope of the left-hand side's param to an otherwise unscoped 
param of the right-hand side, e.g.


alias T function(scope T) fps_t;
T foo(T);
fps_t  fp = &foo; // Error


Half-jokingly, I think it's good to recognize that there is an 
ambiguity before multiple authors implement things with different 
assumptions and have a month-long discussion on github. :)


More seriously, this wording may make its way into the 
documentation after the design is implemented.


If the wording is fixed, I suggest also checking that all the 
examples are correct.


Re: For the Love of God, Please Write Better Docs!

2016-08-06 Thread poliklosio via Digitalmars-d

On Friday, 5 August 2016 at 21:01:28 UTC, H.Loom wrote:

On Friday, 5 August 2016 at 19:52:19 UTC, poliklosio wrote:

On Tuesday, 2 August 2016 at 20:26:06 UTC, Jack Stouffer wrote:

(...)

(...)
In my opinion open source community massively underestimates 
the importance of high-level examples, articles and tutorials. 
(...)


This is not true for widely used libraries. I can find examples 
everywhere of how to use FreeType even if the bindings I use have 
0 docs and 0 examples. Idem for libX11... Also I have to say that 
with a well crafted library you understand what a function does 
when it's well named, when the parameters are well named.


This is another story for marginal libraries, e.g when you're 
part of the early adopters.


I think we agree here. Most libraries are marginal, and missing a 
proper announcement and documentation is the main reason they are 
marginal. Hence, this is true for most libraries.
Of course there are good ones; sadly, not many D libraries are 
really well documented.


If you are a library author and you are reading this, let me 
quantify this for you.


Thank you! You're so generous.


Why the sarcasm? I was just venting (at no-one in particular) 
after hitting the wall for a large chunk of my life.



(...)

This also explains part of the complaints about Phobos documentation 
- people don't get the general idea of how to make things work 
together.


For phobos I agree. D examples shipped with DMD are ridiculous. 
I was thinking of proposing an initiative to renew them 
completely with small, usable and idiomatic programs.


Those who do get the general idea don't care much about the 
exact width of whitespace and other similar concerns.


I don't understand your private thing here. Are you talking 
about text justification in ddoc ? If it's a mono font no 
problem here...


I'm lost here. The "width of whitespace" was just an example of 
something you would NOT normally care about if you were a savvy 
user who already knows how to navigate the docs.


Re: For the Love of God, Please Write Better Docs!

2016-08-05 Thread poliklosio via Digitalmars-d

On Tuesday, 2 August 2016 at 20:26:06 UTC, Jack Stouffer wrote:

(...)


I cannot agree more with that.

In my opinion the open source community massively underestimates 
the importance of high-level examples, articles and tutorials.
To most library authors (judging from projects I've seen), source 
code and reference docs are 90% of usability and everything else 
is 10%. This is completely wrong and upside-down!


If you are a library author and you are reading this, let me 
quantify this for you.


Influence of parts of documentation on usability:
- Good tutorials and high-level examples for typical actual 
use-cases: 60%
- Good README file with very clear instructions for installation 
and any other setup, e.g. interoperation with a typical set of 
other libs: 25%

- An article in prose explaining motivation for the library: 10%
- Examples in online reference documentation: 3%
- All the other elements of the reference documentation, and 
source code readability: 2%


This has no scientific basis, but I felt compelled to use numbers 
because words cannot illustrate the massive extent of the problem.


Again, if you are a library author and you think this rant is 
silly then yes, your documentation is that bad.


This also explains part of the complaints about Phobos documentation - 
people don't get the general idea of how to make things work 
together. Those who do get the general idea don't care much about 
the exact width of whitespace and other similar concerns.


Re: Examples of dub use needed

2016-06-21 Thread poliklosio via Digitalmars-d

On Tuesday, 21 June 2016 at 08:58:59 UTC, Dicebot wrote:
But it is indeed supposed to be a rare case and I do recommend 
installing such tools via the system package manager instead 
whenever possible. So yes, you don't need `dub fetch` at all for 
the most part.


Yes, it is best not to interfere with system package managers if 
they exist. On the other hand, how do you install end-user tools 
like dcd or dfmt on Windows, other than git clone + build?


I'm wondering why packages are updated so rarely in the dub 
repository.

For example I have dfmt 0.5.0, while dub fetch installs 0.4.5



Re: Examples of dub use needed

2016-06-21 Thread poliklosio via Digitalmars-d

On Tuesday, 21 June 2016 at 06:30:11 UTC, Jacob Carlborg wrote:

On 2016-06-21 06:42, Guido wrote:
Dub doesn't really work like other package managers. When I 
load a package:


dub fetch scriptlike

It stores it someplace and doesn't say where. You have to run 
'dub list'
to figure out where. That is very different from other package 
managers. It deserves a bigger mention in the meager documentation.


Dub doesn't install packages. It's not a tool meant for end 
users. If you want something installed in the traditional sense 
it should go in the system package manager.


Wow, really?
Then what is the fetch command for? I started using dub recently 
(2 months ago) and totally didn't notice any other purpose for 
the fetch command. I even installed dcd, dfmt and dscanner 
through dub fetch, only to find out these were older versions 
which didn't work.

So what is the purpose of dub fetch?



Re: Optimizations and performance

2016-06-18 Thread poliklosio via Digitalmars-d
On Friday, 10 June 2016 at 08:32:28 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 9 June 2016 at 18:45:48 UTC, poliklosio wrote:
On Thursday, 9 June 2016 at 10:00:17 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 9 June 2016 at 07:26:16 UTC, poliklosio wrote:
First of all, there is not much point optimizing the 
language for people who are capable of optimizing everything 
to the extreme themselves. D already has as much power as 
C/C++ for them.


No... not if you are talking about specific compilers.


Get the logic right. The correct statement is:
"Yes... not if you are talking about specific compilers."


«D already has as much power as C/C++ for them.»

No (D does not already have as much power as C/C++ for them)... 
not if you are talking about specific (C/C++) compilers.


Please don't twist my words. :-)


I thought you were comparing D compilers (dmd vs ldc). If you 
count things like the Intel compiler for C/C++, then yes, C++ 
provides more speed.

If that is what you meant, yes, I'm sorry I twisted your words.

But then this becomes a purely academic discussion, because only 
financially secure companies in rich countries are going to use 
something as expensive as the Intel compiler, and only for some 
very important high-performance projects. Once you talk about 
spending thousands of dollars on a piece of software that can be 
replaced with a free alternative, there are a million other ways 
to spend the money **better**, for example hiring more people or 
better people, or paying for longer development, or paying for 
porting algorithms to GPUs, or buying better CPUs (if the 
software is to be run in-house). So your point has no relevance 
to optimizing default D performance. It would only be relevant if 
you had to pay thousands of dollars to use D tools.


If you didn't have the Intel compiler in mind, then correct me 
again, as I don't have a crystal ball and cannot know which 
specific thing you meant.


Re: I'd love to see DScript one day ...

2016-06-11 Thread poliklosio via Digitalmars-d

On Friday, 10 June 2016 at 22:01:53 UTC, Walter Bright wrote:

On 6/10/2016 3:55 AM, Chris wrote:
> Cool. I'd love to see `DScript` one day - and replace JS once
and for all ...
> well ... just day dreaming ...

Dreams are reality:

https://github.com/DigitalMars/DMDScript



You have 2 readme files, so on github only the very short one 
(README.md) is shown, and that makes a poor impression.


Re: Optimizations and performance

2016-06-09 Thread poliklosio via Digitalmars-d
On Thursday, 9 June 2016 at 10:00:17 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 9 June 2016 at 07:26:16 UTC, poliklosio wrote:
First of all, there is not much point optimizing the language 
for people who are capable of optimizing everything to the 
extreme themselves. D already has as much power as C/C++ for 
them.


No... not if you are talking about specific compilers.


Get the logic right. The correct statement is:
"Yes... not if you are talking about specific compilers."




Re: Optimizations and performance

2016-06-09 Thread poliklosio via Digitalmars-d

On Thursday, 9 June 2016 at 07:02:26 UTC, default0 wrote:

On Thursday, 9 June 2016 at 06:51:53 UTC, poliklosio wrote:

On Thursday, 9 June 2016 at 01:46:45 UTC, Dave wrote:

On Wednesday, 8 June 2016 at 22:32:49 UTC, Ola Fosheim Grøstad
I wouldn't put too much emphasis on that benchmark as the 
implementations appear different? Note that Felix compiles 
to C++, yet beats C++ in the same test? Yes, Felix claims to 
do some high level optimizations, but doesn't that just tell 
us that the C++ code tested wasn't optimal?


Languages should be fast by default. I always find it missing 
the point when people go crazy during these benchmarking 
tests trying to make the code as fast as possible by tweaking 
both the code and the compiler flags. Go through that effort 
with a 2 million line app. Tell me how long that takes.


YES +1000


I agree with the sentiment, but you often do want to spend time 
finding hot paths in your app and optimizing those. In a 2 
million line app, there will not be 2 million lines of code in 
the hot path. So figuring out how many tricks you can do to get 
something going fast does have quite a bit of value, even for 
large apps.


First of all, there is not much point optimizing the language for 
people who are capable of optimizing everything to the extreme 
themselves. D already has as much power as C/C++ for them.
Second, yes, anyone should exercise some basic optimization 
attempts before calling a language slow. Yet, not everyone will 
come up with the same ways to optimize and some people will 
invariably fail. Moreover, the same person may fail, depending on 
what he was writing the week before and how much coffee he had. 
But even assuming that you optimize some hot paths, the rest of 
the code is still going to be slow if the language leads you to 
slow solutions by default.


More importantly, slow software is usually a product of the 
write-it-ASAP mentality, and of people who are *not* good 
programmers. Optimizing their products automatically means that 
80% of the software that you use every day (photoshop plugins, 
office programs, mobile apps) is going to either run much faster 
or get released to the market sooner, which is a big selling 
point for the language.




Re: Optimizations and performance

2016-06-09 Thread poliklosio via Digitalmars-d

On Wednesday, 8 June 2016 at 22:19:47 UTC, Bauss wrote:
D definitely needs some optimizations, I mean look at its 
benchmarks compared to other languages: 
https://github.com/kostya/benchmarks


I believe the first step towards better performance would be 
identifying the specific areas that are slow.


I definitely believe D could do much better than what's shown.

Let's find D's performance weaknesses and crack them down.


Please, ignore the non-actionable advice of those who criticise 
benchmarks. :) Benchmarks are the way to go as there is no better 
way to optimize things.
Also, given enough of them, they really **are** representative of 
the typical runtime of any program because the numbers of bugs in 
different language implementations even out over the whole set of 
benchmarks. This is particularly true if the authors of the D 
code are familiar with the D way of doing things, but even if 
that's not the case, the language shouldn't punish beginners too 
much for writing the simplest solution that could possibly work 
for them.


I think a much better benchmark for actually optimizing would 
stress the performance of D in a comprehensive set of corner 
cases (hundreds), because then, during the optimizations, if you 
tweak something in D to get better performance in one case and 
accidentally slow another case down, you will notice it.

I would start with something like this if I were optimizing D.


Re: Optimizations and performance

2016-06-09 Thread poliklosio via Digitalmars-d

On Thursday, 9 June 2016 at 01:46:45 UTC, Dave wrote:

On Wednesday, 8 June 2016 at 22:32:49 UTC, Ola Fosheim Grøstad
I wouldn't put too much emphasis on that benchmark as the 
implementations appear different? Note that Felix compiles to 
C++, yet beats C++ in the same test? Yes, Felix claims to do 
some high level optimizations, but doesn't that just tell us 
that the C++ code tested wasn't optimal?


Languages should be fast by default. I always find it missing 
the point when people go crazy during these benchmarking tests 
trying to make the code as fast as possible by tweaking both 
the code and the compiler flags. Go through that effort with a 
2 million line app. Tell me how long that takes.


YES +1000




Re: Andrei's list of barriers to D adoption

2016-06-07 Thread poliklosio via Digitalmars-d

On Tuesday, 7 June 2016 at 08:57:33 UTC, Russel Winder wrote:
On Mon, 2016-06-06 at 19:57 +, poliklosio via Digitalmars-d 
wrote:

[…]

I should have been more specific here. I mean I want to 
eliminate GC in my code. I don't mind if you or anyone else 
uses GC. Even I use GC languages when writing things like 
scripts, so I'm not a no-GC-because-I-say-so person.


Indeed. D has a GC by default which can be switched off. This 
is good. That D needs a better GC is an issue.


For me, much bigger issues are:
- The reduced power of the language with the GC switched off.
- Mixing GC and noGC code is still an experimental practice that 
few experts know how to do properly.


A better GC is not a bigger issue for me, as I'm not going to use 
much of it.
A better GC is of course advantageous for adoption; I just have a 
strong impression that there are more important things, like 
nailing easy setup of editors, and providing a guide for 
programming **without** the GC and for mixing GC and NoGC.


You have to distinguish switching the GC off (which implies 2 
languages, 2 communities, 2 separate standard libraries, all with 
some overlap) from being able to mix GC and non-GC code in the 
same program. The problem is that AFAIK the second is not a 
viable methodology outside a tightly controlled environment, 
where you select the libraries you use very carefully and limit 
their number.



(...)
My feeling is there is no point in over-thinking this, or being 
abstract. C++ can have a GC but doesn't. Rust can have a GC but 
doesn't. Python has a GC. Go has a GC. Java has a GC. D has a 
GC that you can turn off. That seems like a USP to me. Whether 
this is good or bad for traction is down to the marketing and 
the domain of use.


I'm trying to be as far from abstract as I can. Having a GC is 
hardly a unique selling point. As for switching it off, see the 
issues above. Once they are solved to the point where experts can 
get noGC stuff done easily, it will be a USP.


D's power is in its native-but-productive approach. This is an 
improvement in C++ niches, not a simplified language for 
banging end-user apps.


Productive is way, way more important than native.


For some people native is necessary. For them D is the way to get 
productive. Others would ideally use D as well, but currently 
there are more productive options for them, like C# or python.



[…]


Why would they not use D? D is a much better language for them 
as well. To give some examples, in C++ code there is a ton of 
boilerplate, while D code hardly has any. Also, the number of 
bugs in a D program is smaller due to easier unittesting. 
Also, templates don't cause day-long stop-and-learn sessions 
as in C++. I don't think those people are a lost market.


Can we drop the unit and just talk about testing. Unit, 
integration and system testing are all important, focusing 
always on unit testing is an error.


There's nothing wrong with discussing unittesting on its own. In 
fact, this is very relevant because it's the unittesting that D 
makes easier. More coarse-grained testing can be done as easily 
in any other language - you just make some executables for 
various subsystems and variants of your program and run them in 
test scenarios.


As to why not use D? The usual answer is that "no one else is using 
D so we won't use D" for everyone except early adopters.


D needs to remanufacture an early adopter feel. It is almost 
there: LDC announcing 1.0.0, Dub getting some activity, new 
test frameworks (can they all lose the unit please in a 
renaming), rising in TIOBE table. This can all be used to try 
and get activity. However it needs to be activity of an early 
adopter style because there are so many obvious problems with 
the toolchains and developer environments. So let's focus on 
getting these things improved so that then the people who will 
only come to a language that has sophistication of developer 
experience.


As long as those are improvements in getting started fast and 
time-to-market for D apps, then yes, and that's probably 10 times 
more important than both the slow GC and the poor noGC experience.


This is a big issue now due to lack of a comprehensive guide, 
as well as holes in the language and phobos (strings, 
exceptions, delegates). C++ doesn't have those holes.


Holes in Phobos can be handled by having third-party things in 
the Dub repository that are superior to what is in Phobos.


I don't think that third-party libraries can have the reach of 
Phobos libraries. Also, things as basic as strings should be in 
the standard library or language, otherwise the whole idea of 
using D looks ridiculous. Having said that, third-party libraries 
can help.


Documentation, guides, and tutorials are a problem, but until 
someone steps up and contributes this is just going to remain a 
problem. One that will undermine all attempts to get traction 
for D. So how to get organizations to put resource into doing 
this?


I think those things can be e

Re: Andrei's list of barriers to D adoption

2016-06-07 Thread poliklosio via Digitalmars-d

On Monday, 6 June 2016 at 02:20:52 UTC, Walter Bright wrote:
Andrei posted this on another thread. I felt it deserved its 
own thread. It's very important.

-
(...)
* Documentation and tutorials are weak.


Regarding documentation, I'll just point out something that 
people seem not to understand here. I think the complaints about 
bad docs are not really about accuracy of what is displayed on 
the screen. I think accuracy is very good. They are more about 
usefulness and discoverability.
It's just the amount of cruft that a user has to go through to 
find out what exists and how he can apply it.


I'll pick on string documentation as an example. Other concepts 
will have different issues.


As a concrete example, how good is
https://docs.python.org/2/library/stdtypes.html#str.find
in comparison with
https://dlang.org/phobos/std_string.html#.indexOf
?

I think the python version has several advantages:
- It is more concise due to uniform handling of single char and 
string.
- It is more concise due to not displaying a lot of pointless 
cruft that the user doesn't care about.

- It is more discoverable due to the name "find".
- It is more discoverable due to being a method of the str type.

Problems with the D version are:
- There is heavy use of UFCS, so things are split into different 
modules. String is an array, which means that UFCS must be used 
to extend it.
- The whole deal with dchar and Unicode vocabulary all over the 
place, which adds to the amount of information. It would be much 
better to just assume correct Unicode handling and concentrate on 
how a function differs from other functions, as this is what the 
user needs. Yes, details are good, but they should not dominate.
- Strong typing in D has a lot of concepts that a newcomer has to 
learn before the docs stop causing anxiety.

- The type constraints add visual noise.
- For the type constraints, it is hard to find out what the 
intention behind them is. It is typically not spelled out in 
plain English.


Can something like this be made for the D version? I claim it 
can. We just have to drop the compulsion to document every detail 
and generate a simplified version of the docs.
I think such a simplified, python-like documentation could live 
as yet another version of docs that concentrates on usage rather 
than definition. It could contain links to the full version. :)
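
For instance, a usage-first fragment for indexOf could be as 
small as this (my own toy example, not taken from any existing 
docs):

import std.string : indexOf;

void main()
{
    string s = "hello, world";
    assert(s.indexOf("world") == 7);   // find a substring
    assert(s.indexOf('o') == 4);       // or a single character
    assert(s.indexOf("xyz") == -1);    // -1 means "not found"
}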


And, as the cherry on the cake, what is the first thing that the 
user sees in the D version?

"Jump to: 2"
It is not readable at all. What is 2? Second what? What is the 
difference between the 2 things? I had to squint hard to find out 
(I wasn't lucky enough to read the first line of the description 
first when I was working out the difference. I started by looking 
at the function signatures). Those things should be described in 
plain English somehow.


And regarding cheatsheets, as pointed out before, they don't work 
as a discoverability aid. You really need 2 or 3 sentences to 
tell what a function does. A line of cryptic code that presents 
*one* data sample doesn't work.


Re: Andrei's list of barriers to D adoption

2016-06-06 Thread poliklosio via Digitalmars-d

On Monday, 6 June 2016 at 15:06:49 UTC, Carl Vogel wrote:
(...) Also, the idea that more people will adopt D if you just 
"get rid of the GC" ignores the fact that you don't just "get 
rid of the GC," you replace it with another memory management 
scheme (manual, RAII, RC). If you replace all the parts of the 
language and phobos that rely on GC with, e.g., a poor 
implementation of RC, then you're just going to get another 
round of complaints, and no real adoption.


I think you are going to get some adoption even if you replace a 
good GC with clunky RC.
The key difference is that a call to a function that uses GC is 
incomplete: some of it will execute later, after the call has 
finished.
On the other hand, a call to an equivalent function that uses RC 
has only local (in time) consequences: once you have finished the 
call, it has stopped executing. If it returned something that 
needs to be freed later, you know what it is.
Of course people can write arbitrarily messed up things like 
singletons etc. but I'm not counting those because good libraries 
are usually free of those.
This means you have control over all the OTHER code, however 
inefficient the call is.
Hence, the second is acceptable in low-latency code, but the 
first is not.




Re: Andrei's list of barriers to D adoption

2016-06-06 Thread poliklosio via Digitalmars-d

On Monday, 6 June 2016 at 10:54:41 UTC, Guillaume Piolat wrote:

On Monday, 6 June 2016 at 08:18:20 UTC, Russel Winder wrote:

(...)
This should be marketed as a major feature of D: the language 
with a GC for those situations where you want it, and manual 
memory management for those cases where you do not want a GC.


Having the GC for the UI is very pleasant, while @nogc 
time-critical code won't use it.


I think the problem is that the message then becomes more 
complicated.
GC is easily a victim of the "holier-than-thou" fallacy, because 
evidently less GC is supposed to translate into faster 
programs. Er... right?


I'm just worried how usable this approach really is at scale. If 
you combine 20 D libraries, 17 of which use GC, are you able to 
control the GC well enough for a low-latency app? The problem 
with GC is that it's a global (per-process) resource, so it poisons 
everything in the program with its unpredictable time 
consumption. I would be hesitant to market this without designing 
and testing a viable development methodology first. And then 
there is reference counting which is another way to be productive 
that doesn't have this problem.


Re: Andrei's list of barriers to D adoption

2016-06-06 Thread poliklosio via Digitalmars-d

On Monday, 6 June 2016 at 08:42:55 UTC, Russel Winder wrote:
On Mon, 2016-06-06 at 06:50 +, poliklosio via Digitalmars-d 
wrote:


[…]



Please, elliminate GC.


Let's not. It is a USP against C++ and Rust. It forges a new 
route to traction, cf. Go, Java, etc.


I should have been more specific here. I mean I want to 
eliminate GC in my code. I don't mind if you or anyone else uses 
GC. Even I use GC languages when writing things like scripts, so 
I'm not a no-GC-because-I-say-so person.


Is it a unique selling point (USP) against C++ or Rust? I don't 
think so. People who use GC languages for business/scientific 
apps don't care what is behind the scenes. Also, the relationship 
between GC and productivity is a subtle point that requires some 
CompSci background to grasp. I think D is far too complicated to 
be marketed as even simpler than python or Go. Low-latency people 
do care what is behind the scenes and they understandably want no 
GC. That leaves high-performance high-latency people. If you 
think you can find a niche there, fair enough, otherwise it's not 
a USP.
D's power is in its native-but-productive approach. This is an 
improvement in C++ niches, not a simplified language for banging 
end-user apps.



This also hurts the open source community. Why would I 
write/opensource a high performance library if I know that 
projects like AAA games are not going to use it anyway due to GC 
in D? On the other hand if I can write a library that guarantees 
to not use and not need a garbage collector, then even C and C++ 
projects can use it.
With GC, D doesn't play nice with existing C/C++ code.


There may be some instances where this is the case. Let them 
use C++ or Rust. Fine. If AAA games people want C++ they will 
use C++, for D they are a lost market.


Why would they not use D? D is a much better language for them as 
well. To give some examples, in C++ code there is a ton of 
boilerplate, while D code hardly has any. Also, the number of 
bugs in a D program is smaller due to easier unittesting. Also, 
templates don't cause day-long stop-and-learn sessions as in C++. 
I don't think those people are a lost market.


And anyway D has a GC if you want and a no-GC if you want. Thus 
this is not actually an issue anyway.


This is a big issue now due to lack of a comprehensive guide, as 
well as holes in the language and phobos (strings, exceptions, 
delegates). C++ doesn't have those holes.




Re: Andrei's list of barriers to D adoption

2016-06-06 Thread poliklosio via Digitalmars-d

On Monday, 6 June 2016 at 04:38:15 UTC, Jack Stouffer wrote:

(...)
While I understand that some people can't afford a GC, this has 
confused me as well.


I never understood the large amount of people on /r/programming 
complaining about the GC when the vast majority of software is 
written in one of the following languages: C#, Java, PHP, 
Python, JavaScript. Those have to cover at least 80% of all 
software projects in the US and not only do they have a GC, 
they force you to use a GC. This just shows to me that 
/r/programming is not a representative sample of programmers at 
all.


The anti D's GC thing has become a meme at this point. I have 
literally seen only one person on /r/programming complain about 
Go's GC, despite Go being a slower language overall.


People constantly raise the argument that some large fraction 
(e.g. 80%) of software in all languages is written with GC just 
fine. This is missing a few points:
- It is often not "just fine" even if they use it. Authors 
sometimes don't realize that GC would be a liability in their 
projects until its too late. Then they fight it. Also, people may 
be forced to use GC because libraries they need use GC.
- Most people don't actively want GC, they just want 
productivity. Whether its GC that gives it or something else, 
they don't care. If something else was providing productivity, 
people wouldn't care that its not GC. Cpython uses reference 
counding as its GC strategy. Do you think most people care?
- The minority of applications which cannot use GC is not 
necessarily also a minority in economic value or in the number of 
running copies. Most applications are one-off internal business 
apps or scientific experiments. Also, for every 
10 programs there are probably 8 bad ones. Hence, the number of 
applications is a pretty silly metric. Note that non-GC 
applications are often multi-million dollar operating systems, 
AAA games, control software, AI software and server software.


Re: Andrei's list of barriers to D adoption

2016-06-06 Thread poliklosio via Digitalmars-d

On Monday, 6 June 2016 at 02:20:52 UTC, Walter Bright wrote:
* The garbage collector eliminates probably 60% of potential 
users right off.


Please, eliminate GC.
This also hurts the open source community. Why would I 
write/opensource a high performance library if I know that 
projects like AAA games are not going to use it anyway due to GC 
in D? On the other hand if I can write a library that guarantees 
to not use and not need a garbage collector, then even C and C++ 
projects can use it.

With GC, D doesn't play nice with existing C/C++ code.

* Tooling is immature and of poorer quality compared to the 
competition.


Quality is an issue, but I think a bigger problem for adoption is 
just the time it takes a new user to set the dev environment up. 
If you look at those pages:

https://wiki.dlang.org/Editors
https://wiki.dlang.org/IDEs
most of the tools use dcd, dscanner and dfmt to provide the most 
important features like auto-completion, autoformatting etc.
The problem is that dcd, dscanner and dfmt are not bundled 
together so it takes a long time to actually install all this 
together. Note that discovering what is what also takes a lot of 
time for a new user.
I tried to do this recently and it took me 2 days before I found 
a combination of versions of dcd, dscanner, dfmt, dub and an IDE 
that work together correctly on Windows. My example is probably 
extreme, but it is a lesson.


May I suggest bundling the official tools with dcd, dscanner, 
dfmt and dub to create a Dlang-SDK.


Then the user would only have to install
- Dlang-SDK
- An editor
And everything would work. This would reduce the time from a few 
hours (with a very large variance) to 30 minutes. Then maybe 
people who try D in their free time without being strongly 
indoctrinated by Andrei will not quit after 30 minutes. :)


Re: throw Exception with custom message in nogc code

2016-06-05 Thread poliklosio via Digitalmars-d-learn

On Sunday, 5 June 2016 at 10:37:49 UTC, poliklosio wrote:
Also, exceptions are not necessarily for bugs. They may be 
used sometimes for bug handling when other things like static 
typing and assertions are not enough, but bug handling is not 
the core reason for having exceptions in languages.


Actually, in D, things like assertions already behave like 
exceptions jumping up the call stack and generating stack traces 
so there should be no reason to ever use normal exceptions to 
handle programmer mistakes. I like this language a little bit 
more every day I use it.


Re: throw Exception with custom message in nogc code

2016-06-05 Thread poliklosio via Digitalmars-d-learn

On Sunday, 5 June 2016 at 06:25:28 UTC, HubCool wrote:

(...)
But I'd say that the leak doesn't matter. Either the software has 
a very small problem that happens once eventually, or it's a big 
bug and new exceptions will come so often that the program has to 
be killed immediately.


+--+
auto leakAnoGcException(T, A...)(A a) @nogc
if (is(T: Exception))
{
    import std.experimental.allocator.mallocator;
    import std.experimental.allocator;
    return make!T(Mallocator.instance, a);
    // eventually stores them on a stack that you can free in static ~this()
}

void main() @nogc
{
    bool ouch;
    class MyException: Exception { this(string m) @nogc { super(m); } }

    try throw leakAnoGcException!MyException("ouch");
    catch (Exception e) { ouch = true; /*can dispose here too...*/ }

    assert(ouch);
}
+--+


I like your solution, as it doesn't force you to allocate 
exception objects statically, which is a step forward.


On the other hand, I don't agree that the leak doesn't matter. 
That would only be the case if one could guarantee that only a 
small, constant number of exceptions is thrown during program 
execution. This is not generally the case. Also, exceptions are 
not necessarily for bugs. They may be used sometimes for bug 
handling when other things like static typing and assertions are 
not enough, but bug handling is not the core reason for having 
exceptions in languages.
Exceptions are a control flow construct for events which occur 
rarely (relative to the normal execution of the surrounding code) 
and require jumping many levels up the call stack. That's all 
there is to them. It's not some philosophical concept that 
depends on the definition of errors or bugs.


Can you elaborate on how to dispose the exception?
I'm particularly interested in the code you would write in place 
of the /*can dispose here too...*/ comment.
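
To make the question concrete, my own guess would be something 
like the following (just a sketch; I haven't checked whether 
destroy/dispose can pass @nogc, so I left main unannotated, and I 
allocate the exception inline instead of going through 
leakAnoGcException to keep it self-contained). Is this what you 
had in mind?

import std.experimental.allocator : make, dispose;
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    static class MyException : Exception
    {
        this(string m) @nogc { super(m); }
    }

    bool ouch;
    try throw Mallocator.instance.make!MyException("ouch");
    catch (Exception e)
    {
        ouch = true;
        // in place of the /*can dispose here too...*/ comment:
        // run the destructor and free the malloc'd object
        Mallocator.instance.dispose(e);
    }
    assert(ouch);
}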


throw Exception with custom message in nogc code

2016-06-04 Thread poliklosio via Digitalmars-d-learn
I need to throw some exceptions in my code, but I don't want to 
ever care about the garbage collector.


I have seen some solutions to throwing exceptions in nogc code, 
but only toy examples, like

https://p0nce.github.io/d-idioms/#Throwing-despite-@nogc

The solution sort of works, but doesn't show how to pass a custom 
string to the exception, and the text says "This trick is a dirty 
Proof Of Concept. Just never do it.".

Is there a solution?
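
To be concrete, the kind of thing I'm after is roughly this (only 
a sketch of the preallocated-exception idea; I'm not at all sure 
it is sound, e.g. with respect to reusing the same object or 
exception chaining):

// one preallocated exception whose message is overwritten before throwing
class PreallocatedException : Exception
{
    this() @nogc nothrow { super(""); }
}

PreallocatedException cached;

static this()
{
    cached = new PreallocatedException(); // allocated once, outside the @nogc code
}

void raise(string message) @nogc
{
    cached.msg = message; // Throwable.msg is a plain field, so no allocation here
    throw cached;
}

void main()
{
    bool caught;
    try raise("custom message, no GC involved");
    catch (Exception e)
    {
        caught = (e.msg == "custom message, no GC involved");
    }
    assert(caught);
}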


Re: Dealing with Autodecode

2016-06-02 Thread poliklosio via Digitalmars-d

On Thursday, 2 June 2016 at 07:21:28 UTC, Joakim wrote:

On Thursday, 2 June 2016 at 06:53:49 UTC, poliklosio wrote:

(...)


It has been noted many times that forum users are a small part 
of the D userbase, likely the ones who are the most interested 
in evolving the language and thus biased towards changes.  As a 
forum user myself, I'm in that group too and agree with Walter 
that D programmers should be guided by Phobos to explicitly 
declare what level of decoding they want, but this poll may not 
be representative of the wider userbase.


We'll likely only find out what they think once we're a couple 
dmd releases into these changes, as Walter found when he 
submitted PRs for file/path code sometime back.


It's not representative, but there is going to be at least some 
weak correlation between the forum and the professional world. We 
are developers after all. Out of 16 professional users, none 
selected the "Please, don't break my code" option, which suggests 
there is some hope that a change wouldn't be that damaging. Of 
course further investigation would be needed to confirm that 
hypothesis. But at least we didn't prove that such an 
investigation is a waste of time.


Also, on the issue of wanting/not wanting autodecoding as a 
feature (ignoring the code breakage issue), 0 out of 55 people 
actually want autodecoding. I think it's improbable that most 
users outside the forum would hold the opposite view. You would 
see at least some of it reflected in the poll.


So the poll does tell us something; you just have to know how not to 
overinterpret the results. :)





Re: Dealing with Autodecode

2016-06-02 Thread poliklosio via Digitalmars-d

On Thursday, 2 June 2016 at 00:14:30 UTC, Seb wrote:

Just FYI after a short period of ten hours we got the following 
45 responses:


Yes, with fire! (hobby user)
77% (35)
Yeah remove that special behavior (professional user)
35% (16)
Wait that is what auto decoding is? wah ugh...
8%  (4)
I don't always decode codeunits, but when I do I use byDChar already
6%  (3)


You failed to mention that there were additional answers:

Auto-decoding is great!
0% (0)
No, please don't break my code.
0% (0)

I think those zeroes are actually the most important part of the 
results. :)


Re: Dealing with Autodecode

2016-06-01 Thread poliklosio via Digitalmars-d

On Wednesday, 1 June 2016 at 05:46:29 UTC, Kirill Kryukov wrote:

On Wednesday, 1 June 2016 at 01:36:43 UTC, Adam D. Ruppe wrote:
D USERS **WANT** BREAKING CHANGES THAT INCREASE OVERALL CODE 
QUALITY WITH A SIMPLE MIGRATION PATH


This.
(...)
I don't want to become an expert in avoiding language pitfalls 
(The reason I abandoned C++ years ago).


+1
If you have too many pitfalls in the language, it's not easier to 
learn than C++, just different (regardless of the maximum 
productivity you have when using the language, that's another 
issue).
The worst case is you just want to use ASCII text and suddenly 
you have to spend weeks reading a ton of confusing stuff about 
Unicode, D and autodecoding, just to know how to use char[] 
correctly in D.
Compare that to how trivial it is to process ASCII text in, say, 
C++.
And processing just plain ASCII is a very common case, e.g. 
processing textual logs from tools.


Improvement of error messages for failed overloads by attaching custom strings

2016-06-01 Thread poliklosio via Digitalmars-d
I just have an idea which some of you may find good. I described 
my idea on the DMD issue tracker but no one responded, so I'm posting 
here also.

Link to issue:
https://issues.dlang.org/show_bug.cgi?id=16059

tldr:
Compiler error messages for failed overloads are not helpful 
enough.
Some useful information that the messages **should** have cannot 
be trivially generated from the code.
Extend the language so that a library author can write something 
like this:


pragma(on_overload_resolution_error, "std.conv.toImpl", "<error 
message>")

Examples of "" that library author may 
choose to write:
- "The most common reason for \"toImpl\" overload resolution 
error is using the \"to\" function incorrectly."
- "The \"to\" function is intended to be used for types summary of acceptable types in English language>."

- "Some of the common fixes are: ."

Does any of this make sense?


Re: Button: A fast, correct, and elegantly simple build system.

2016-05-30 Thread poliklosio via Digitalmars-d-announce

On Monday, 30 May 2016 at 19:16:50 UTC, Jason White wrote:
I am pleased to finally announce the build system I've been 
slowly working on for over a year in my spare time:


Docs:   http://jasonwhite.github.io/button/
Source: https://github.com/jasonwhite/button



Great news! Love to see innovation in this area.


- Lua is the primary build description language.


Why not D?



Re: Introducing mach.d, the github repo where I put whatever modules I happen to write

2016-05-26 Thread poliklosio via Digitalmars-d-announce

On Wednesday, 25 May 2016 at 22:48:53 UTC, pineapple wrote:

On Wednesday, 25 May 2016 at 22:29:38 UTC, pineapple wrote:

I will do that


...I'm honestly having second thoughts because reading the 
style guide for phobos was like watching a B horror movie.


All the code in the mach.d repo is very permissively licensed 
and anyone with the patience to write code that looks like this 
is absolutely free to morph my pretty code into ugly phobos 
code if they like, provided the license is adhered to (which 
can be pretty effectively summed up as "please just give credit 
where it's due")


If you are up to maintaining both your lib for your own purposes 
AND a phobos module, you can run your code through dfmt to adjust 
it to the phobos style guide.


On the other hand, from what I observed about programming 
language communities, adding a standard library module seems like 
a ton of work, including:
- Long philosophical discussions about which specific functions 
should or shouldn't be in the standard library, even when their 
usefulness for module users is obvious.
- Long discussions about merging with or reusing other existing 
modules, which result in your solution not being clean anymore, 
which in turn triggers more discussions about what should be 
excluded from the module.
- Adding unittests and type restrictions, static ifs and 
optimizations for every conceivable use case.


I wouldn't bother adding something to the standard library unless 
I were an expert in the field who was likely to get things right 
the first time. I would definitely not bother if a module is just 
a means to achieve another goal.


It might be better to observe how your code evolves over time and 
then select one or two specific pieces which are definitely 
useful, clean and correct.


Re: D plugin for Visual Studio Code

2016-05-22 Thread poliklosio via Digitalmars-d

On Sunday, 22 May 2016 at 21:33:49 UTC, WebFreak001 wrote:

On Sunday, 22 May 2016 at 17:49:08 UTC, poliklosio wrote:
Oh, I see. I didn't realize you don't have a Windows machine 
available. I'll try to build the newest version when I have 
the time.


It's pretty unfortunate that it doesn't work because VSCode is 
the only editor that has all ticks on this page

https://wiki.dlang.org/Editors
so people new to D are more likely to try VSCode first (like 
me), only wasting time if they are on Windows.


windows hates me too much, these permission issues don't make 
any sense. Why wouldn't dub be able to write the lib file to 
the project directory? Fixing workspace-d-installer is just as 
important as fixing workspace-d for windows. Also the laptop is 
so super slow, I think my Windows VM would be faster. Gonna try 
and fix the issues on there in the next few days.


Maybe you are trying to write to the Program Files folder, which 
has been unwritable without admin privileges since approximately 
Windows 7?


Anyway, good luck! I hope you don't give up.

For those who wonder what works on Windows, Eclipse + DDT works 
great for me (specifically Windows 7).


Re: D plugin for Visual Studio Code

2016-05-22 Thread poliklosio via Digitalmars-d

On Sunday, 22 May 2016 at 17:04:01 UTC, WebFreak001 wrote:

On Sunday, 22 May 2016 at 15:35:23 UTC, poliklosio wrote:
The code-d plugin hasn't worked on Windows for a very long time 
(months). There is even an issue on github

https://github.com/Pure-D/code-d/issues/38
Do you have any plans of fixing it or is Windows low priority?


It would be nice to fix it but I have no way of testing if it 
actually worked. Everything works here on linux, even if I 
change dcd to use TCP instead of unix domain sockets. I made 
some minor changes to catch some errors, can you recompile 
workspace-d and update code-d and try if it works now?


Oh, I see. I didn't realize you don't have a Windows machine 
available. I'll try to build the newest version when I have the 
time.


It's pretty unfortunate that it doesn't work because VSCode is the 
only editor that has all ticks on this page

https://wiki.dlang.org/Editors
so people new to D are more likely to try VSCode first (like me), 
only wasting time if they are on Windows.


Re: D plugin for Visual Studio Code

2016-05-22 Thread poliklosio via Digitalmars-d

On Sunday, 22 May 2016 at 12:47:36 UTC, WebFreak001 wrote:

On Sunday, 22 May 2016 at 12:42:51 UTC, nazriel wrote:
Bad in the sense that you are required to actually do the 
searching ;)


And it breaks the convention used by other language plugins.

So as you can see by the presence of this topic, plugin (which 
is really top notch btw) is easily overlooked


When I made the plugin there was no convention because there 
were only some syntax highlighting packages and maybe 4 or 5 
actual plugins for anything more than syntax highlighting.


Any idea for a better plugin name? I can easily rename it in 
the marketplace and it will still be installable with `code-d`


The code-d plugin hasn't worked on Windows for a very long time 
(months). There is even an issue on github

https://github.com/Pure-D/code-d/issues/38
Do you have any plans of fixing it or is Windows low priority?


Re: DMD producing huge binaries

2016-05-21 Thread poliklosio via Digitalmars-d

On Saturday, 21 May 2016 at 18:25:46 UTC, Walter Bright wrote:

On 5/20/2016 11:18 PM, poliklosio wrote:
I have an idea for reproducible, less-than-exponential-time 
mangling, although I don't know how actionable it is.

Leave mangling as is, but pretend that you are mangling 
something different, for example when the input is
foo!(boo!(bar!(baz!(int))), bar!(baz!(int)), baz!(int))
pretend that you are mangling
foo!(boo!(bar!(baz!(int))), #1, #2)
Where #1 and #2 are special symbols that refer to stuff that was 
**already in the name**, particularly:
#1: bar!(baz!(int))
#2: baz!(int)
#2: baz!(int)


This is what LZW compression does, except that LZW does it in 
general rather than just for identifiers.


That's true, but a general compression algorithm requires a 
stream of chars as input, so to compress something of exponential 
size you still need exponential runtime in order to iterate 
(while compressing) on the exponential input.
The idea was to avoid such exponential iteration in the first 
place by doing some sort of caching.
My thinking is that if you only reduce size but keep exponential 
runtime, you are going to have to revisit the issue in a few 
months anyway when people start using things you enabled more 
heavily.

Anyway I wish you good luck with all this!


Re: DMD producing huge binaries

2016-05-21 Thread poliklosio via Digitalmars-d

On Saturday, 21 May 2016 at 08:57:57 UTC, Johan Engelen wrote:

On Saturday, 21 May 2016 at 06:18:05 UTC, poliklosio wrote:


I have an idea for reproducible, less-than-exponential-time 
mangling, although I don't know how actionable it is.


Leave mangling as is, but pretend that you are mangling 
something different, for example when the input is

foo!(boo!(bar!(baz!(int))), bar!(baz!(int)), baz!(int))
pretend that you are mangling
foo!(boo!(bar!(baz!(int))), #1, #2)
Where #1 and #2 are special symbols that refer to stuff that 
was **already in the name**, particularly:

#1: bar!(baz!(int))
#2: baz!(int)


See 
http://forum.dlang.org/post/szodxrizfmufqdkpd...@forum.dlang.org


So if I understand correctly, you tried implementing something 
like this, but it didn't help much, even with the size of mangled 
names. Are you sure you were testing on the pathological case 
(exponential stuff), rather than a bad one? Assuming your 
experiment is correct, something interesting is happening, and I 
will be observing you guys finding out what it is. :)


Re: DMD producing huge binaries

2016-05-21 Thread poliklosio via Digitalmars-d
On Thursday, 19 May 2016 at 13:45:18 UTC, Andrei Alexandrescu 
wrote:

On 05/19/2016 08:38 AM, Steven Schveighoffer wrote:

Yep. chain uses voldemort type, map does not.


We definitely need to fix Voldemort types. Walter and I bounced 
a few ideas during DConf.


1. Do the cleartext/compress/hash troika everywhere (currently 
we do it on Windows). That should help in several places.


2. For Voldemort types, simply generate a GUID-style random 
string of e.g. 64 bytes. Currently the name of the type is 
composed out of its full scope, with some 
exponentially-increasing repetition of substrings. That would 
compress well, but it's just a lot of work to produce and then 
compress the name. A random string is cheap to produce and 
adequate for Voldemort types (you don't care for their name 
anyway... Voldemort... get it?).


I very much advocate slapping a 64-long random string for all 
Voldemort returns and calling it a day. I bet Liran's code 
will get a lot quicker to build and smaller to boot.



Andrei


I have an idea for reproducible, less-than-exponential-time 
mangling, although I don't know how actionable it is.


Leave mangling as is, but pretend that you are mangling something 
different, for example when the input is

foo!(boo!(bar!(baz!(int))), bar!(baz!(int)), baz!(int))
pretend that you are mangling
foo!(boo!(bar!(baz!(int))), #1, #2)
Where #1 and #2 are special symbols that refer to stuff that was 
**already in the name**, particularly:

#1: bar!(baz!(int))
#2: baz!(int)

Now, this is a reduction of the exponential computation time 
only if you can detect repetitions before you go inside them, for 
example, in the case of #1, detect the existence of bar!(baz!(int)) 
without looking inside it and seeing baz!(int). Do you think it 
could be done?


You would also need a deterministic way to assign those symbols 
#1, #2, #3... to the stuff in the name.


Compression can also be done at the end if necessary to reduce 
the polynomial growth.
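
To make the back-referencing concrete, here is a toy sketch (only 
an illustration of the idea, nothing like dmd's real mangler; I 
use the position of the earlier occurrence as the back-reference 
symbol, which keeps the assignment deterministic):

import std.format : format;
import std.string : indexOf;

// Join already-mangled template arguments, replacing any argument whose
// text already occurs earlier in the name with a #<position> reference.
string mangleArgs(string symbol, string[] args)
{
    string result = symbol ~ "!(";
    foreach (i, a; args)
    {
        if (i) result ~= ",";
        auto pos = result.indexOf(a);
        if (pos >= 0)
            result ~= format("#%s", pos); // repeated piece: emit a short reference
        else
            result ~= a;                  // first occurrence: emit it in full
    }
    return result ~ ")";
}

unittest
{
    auto m = mangleArgs("foo",
        ["boo!(bar!(baz!(int)))", "bar!(baz!(int))", "baz!(int)"]);
    // the two repeated arguments collapse into back-references
    assert(m == "foo!(boo!(bar!(baz!(int))),#10,#15)");
}

The part this naive version does not solve is exactly the one in 
question above: it still receives the repeated substrings in 
full, so detecting the repetition before ever expanding it is 
where the real win would have to come from.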


Re: DMD producing huge binaries

2016-05-20 Thread poliklosio via Digitalmars-d

On Friday, 20 May 2016 at 05:34:15 UTC, default0 wrote:

On Thursday, 19 May 2016 at 22:16:03 UTC, Walter Bright wrote:
> (...)
(...) you still avoid generating a 5 Exabyte large symbol name 
just to compress/hash/whatever it.


This is why compression is not good enough.


Re: DMD producing huge binaries

2016-05-19 Thread poliklosio via Digitalmars-d

On Thursday, 19 May 2016 at 23:56:46 UTC, poliklosio wrote:

(...)
Clearly the reason for building such a gigantic string was some 
sort of repetition. Detect the repetition on-the-fly to avoid 
processing all of it. This way the generated name is already 
compressed.


It seems like dynamic programming can be useful.


Re: DMD producing huge binaries

2016-05-19 Thread poliklosio via Digitalmars-d

On Thursday, 19 May 2016 at 22:46:02 UTC, Adam D. Ruppe wrote:

On Thursday, 19 May 2016 at 22:16:03 UTC, Walter Bright wrote:
Using 64 character random strings will make symbolic debugging 
unpleasant.


Using 6.4 megabyte strings already makes symbolic debugging 
unpleasant.


The one thing that worries me about random strings is that it 
needs to be the same across all builds, or you'll get random 
linking errors when doing package-at-a-time or whatever (dmd 
already has some problems like this!). But building a gigantic 
string then compressing or hashing it still sucks... what we 
need is a O(1) solution that is still unique and repeatable.


Clearly the reason for building such a gigantic string was some 
sort of repetition. Detect the repetition on-the-fly to avoid 
processing all of it. This way the generated name is already 
compressed.


Re: [OT] Re: Github names & avatars

2016-05-16 Thread poliklosio via Digitalmars-d

On Saturday, 14 May 2016 at 19:19:08 UTC, Walter Bright wrote:

On 5/13/2016 10:08 PM, Andrei Alexandrescu wrote:

Your name is your brand.


Probably the most obvious example of this is Trump. Long before 
he got into politics, he understood his name was his brand, and 
never lost an opportunity promote his brand (and profit off of 
it).


In fact, I stopped using pseudonyms online for my professional 
work because I watched "The Apprentice" and realized that he 
had the right idea.


Also, think of the top programmers in the business. They all do 
things online under their own names.


If I had to guess, I would say that using your own name instead of 
a fake one is OK. However, I would be cautious.


Please, mind the survivor bias! Would someone post here if he 
lost his engineering career because of a comment on the internet? 
Of course not!


How many of those unlucky guys are there per one successful one? 
10, 1, 0.1, 0.01? You can't know until you actually do the hard 
work of collecting actual statistical data.


For everything anyone writes there are pretty much as many 
interpretations as there are readers.
And vague claims about touchy subjects are a million times worse. 
If someone has a bad day, this really can turn out badly. Cultural 
differences make it even worse. What is a technical disagreement 
in one country can be racism in another. Given sufficiently many 
words, those things can happen. A joke can be easily 
misunderstood as something completely different than intended. 
Also, social media sometimes spreads the news about people's 
textual mistakes. Just google "careers destroyed by social media".
Also, you assume that you are going to be judged by technical 
merit alone, which is fair enough if you are white male with good 
reputation, living in a country known for its love for freedom of 
speech. It may be very different if you are a poor immigrant girl 
in a third world country trying to convince a prospective 
employer to give you the first chance at trying to do some 
programming.


Another bias: Everyone always wants to think they are always 
victims and never the perpetrators, but still somehow 
perpetrators exist. After all, I would never say anything hurtful 
to anyone ever, right? And those people who accidentally hit 
pedestrians with cars are always pure evil, and I could never be 
one of them, right? Noone could ever be one of those until he is. 
:)


There's a reason why stuff like correspondence is traditionally 
private.


Re: Always false float comparisons

2016-05-15 Thread poliklosio via Digitalmars-d
Can you provide an example of a legitimate algorithm that 
produces degraded results if the precision is increased?


The real problem here is the butterfly effect (the chaos theory 
thing). Imagine programming a multiplayer game. Ideally you only 
need to synchronize user events, like key presses etc. Other 
computation can be duplicated on all machines participating in a 
session. Now imagine that some logic other than display (e.g. 
player-bullet collision detection) is using floating point. If 
those computations are not reproducible, a higher precision on 
one player's machine can lead to huge inconsistencies in game 
states between the machines (e.g. my character is dead on your 
machine but alive on mine)!
If the game developer cannot achieve reproducibility, or it takes 
too much work, the workarounds can be very costly. He can, for 
example, convert the implementation to soft float or increase the 
amount of synchronization over the network.
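
A toy illustration of the kind of divergence I mean (a made-up 
update rule; the only point is that the two precisions drift 
apart, so any branch on the result can go different ways on 
different machines):

import std.stdio : writefln;

float stepF(float x) { return x * 3.75f + 0.1f; } // some game-logic update, single precision
real  stepR(real  x) { return x * 3.75L + 0.1L; } // the same update at extended precision

void main()
{
    float f = 0.1f;
    real  r = 0.1f; // both sides start from the identical value
    foreach (i; 0 .. 30)
    {
        f = stepF(f);
        r = stepR(r);
    }
    writefln("float: %s   real: %s   difference: %s", f, r, r - f);
    // a decision like `if (position > threshold)` can now disagree between machines
}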


Also I think Adam is making a very good point about general 
reproducibility here. If a researcher gets slightly different 
results, he has to investigate why, because he needs to rule out 
all the serious mistakes that could be the cause of the 
difference. If he finds out that the source was an innocuous 
refactoring of some D code, he will be rightly frustrated that D 
has caused so much unnecessary churn.


I think the same problem can occur in mission-critical software 
which undergoes strict certification.


Re: The Case Against Autodecode

2016-05-13 Thread poliklosio via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:

On Thursday, 12 May 2016 at 20:15:45 UTC, Walter Bright wrote:

(...)
Wow, that's eleven things wrong with just one tiny element of 
D, with the potential to cause problems, whether fixed or not.  
And I get called a troll and other names when I list half a 
dozen things wrong with D, my posts get removed/censored, etc, 
all because I try to inform people not to waste time with D 
because it's a broken and failed language.


*sigh*

Phobos, a piece of useless rock orbiting a dead planet ... the 
irony.


You get banned because there is a difference between torpedoing a 
project and offering constructive criticism.
Also, you are missing the point by claiming that a technical 
problem is sure to kill D. Note that very successful languages 
like C++, python and so on have also undergone heated discussions 
about various features, and often live with design mistakes for 
many years. The real reason why languages are successful is what they 
enable, not how many quirks they have.

Quirks are why they get replaced by others 20 years later. :)


Re: Researcher question – what's the point of semicolons and curly braces?

2016-05-03 Thread poliklosio via Digitalmars-d

On Tuesday, 3 May 2016 at 18:37:54 UTC, Chris wrote:
If a woman doesn't want to program, she just doesn't want to, 
even if it's in Python. It's the term "programming" that makes 
them (i.e. those who are not interested) run away. "Write a 
script" sounds nicer, but even then ... if they don't have to 
they won't even touch it with thongs.


I'm bewildered by the use of the word "thongs" here. I'm just 
really surprised to see it here. Is it some kind of a spelling 
mistake? I'm starting to think that my imperfect non-native 
English is the issue.


Re: String lambdas

2016-05-01 Thread poliklosio via Digitalmars-d
Whether the compiler allows explicit string lambdas in user 
code in another issue.


s/in another issue/is another issue/


Re: String lambdas

2016-05-01 Thread poliklosio via Digitalmars-d
On Tuesday, 26 April 2016 at 17:58:22 UTC, Andrei Alexandrescu 
wrote:

https://github.com/dlang/phobos/pull/3882

I just closed with some regret a nice piece of engineering. 
Please comment if you think string lambdas have a lot of 
unexploited potential.


One thing we really need in order to 100% replace string 
lambdas with lambdas is function equivalence. Right now we're 
in the odd situation that SomeTemplate!((a, b) => a < b) has 
distinct types, one per instantiation.



Andrei


I will just point out an obvious solution to the problem of 
having distinct types: if a normal lambda is sufficiently simple, 
and is used as a template parameter, lower the code to one that 
uses an equivalent string lambda.


The string needs to be somewhat normalized so that (a, b) => a < b 
and (c, d) => c < d map to the same instantiation.

Whether the compiler allows explicit string lambdas in user code 
in another issue.


Re: DConf 2016 offical presentation template

2016-04-22 Thread poliklosio via Digitalmars-d

On Wednesday, 20 April 2016 at 07:53:53 UTC, Benjamin Thaut wrote:
Is there an official presentation template for Dconf 2016? If 
not, it would be great if someone could create one. Many 
programmers (me included) are not good with picking colors and 
thus presentations usually don't look as good as they could.


Kind Regards
Benjamin Thaut


As a guy who is going to watch the recordings on-line, I would 
like to remind you not to rely on laser pointers too much.

It's never visible on the recordings!
I think it's best (mileage may vary) to have little enough on a 
slide that you can comfortably describe, in words, what to look 
at.


Re: OpenCV bindings for D

2016-03-21 Thread poliklosio via Digitalmars-d

On Monday, 21 March 2016 at 19:16:16 UTC, wobbles wrote:
On Monday, 21 March 2016 at 16:01:59 UTC, Guillaume Piolat 
wrote:

On Monday, 21 March 2016 at 15:45:36 UTC, wobbles wrote:

Hi Folks,

I have a project in mind that I'd like to run on my new 
Raspberry Pi 3.
Essentially a security camera that will only record when it 
detects changes between frames.
Now, this is already a solved problem I believe, however in 
the interest of learning I want to do it myself.


Ideally, I'd compare two images using the OpenCV library, and 
ideally I'd do that from D code.


However, there exists no OpenCV binding for D that I can 
find. Is there a technical reason for this?


I understand the OpenCV C api is quite old and dated, and 
it's recommended to use the C++ one as a result.

On that, where is C++ / D linkage at?

I know very little about linking the two, but it's something 
I'd like to learn more about and see this as an opportunity 
for that - before I sink a load of time into it, however, 
it'd be good to know if it's even feasibly possible to do so.


Thanks!


It's quite easy to write bindings for libraries that have a C 
interface (ie most), if only a bit boring.


That's the thing, it doesn't have a C interface (or more 
correctly, its modern versions don't have a C interface as it 
has been deprecated since I think version 2.4. OpenCV is at 3.4 
now).


I was wondering if there were any difficulties interfacing D to 
the C++ API?


I don't have much experience with D, but I have with OpenCV. The 
key class in OpenCV is cv::Mat, which is used to represent images 
as arguments and return values of almost every OpenCV algorithm. 
It is kind of a handle class - it optionally owns image data and 
manages its lifetime through reference counting and RAII. The 
sheer number of constructors is a bit overwhelming.

See
http://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#mat-mat

I would be interested to see what D experts would say about 
interfacing with such a class.
My guess would be that you may have trouble exposing this 
directly to the D side in a useful way. You may have to write 
some helpers or even a wrapper on the C++ side.
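
If I were to try it myself, my first instinct would be a thin C 
shim plus a small D RAII wrapper, something like this (every name 
below is made up for illustration; none of it exists in OpenCV or 
in any binding I know of):

// hypothetical C shim over cv::Mat, written by hand on the C++ side
extern (C)
{
    struct CvMatHandle;                              // opaque handle to a cv::Mat
    CvMatHandle* cvmat_imread(const(char)* path);    // would wrap cv::imread
    void         cvmat_release(CvMatHandle* m);      // drops the cv::Mat (its RC frees the data)
    int          cvmat_rows(const(CvMatHandle)* m);
    int          cvmat_cols(const(CvMatHandle)* m);
}

// a small D wrapper so the handle's lifetime is managed deterministically
struct Image
{
    private CvMatHandle* handle;

    this(string path)
    {
        import std.string : toStringz;
        handle = cvmat_imread(path.toStringz);
    }

    @disable this(this); // forbid accidental copies and double-release

    ~this()
    {
        if (handle) cvmat_release(handle);
    }

    int rows() const { return cvmat_rows(handle); }
    int cols() const { return cvmat_cols(handle); }
}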