Three people out of four dislike SDL

2015-11-30 Thread retard via Digitalmars-d
Just voted at 
http://www.easypolls.net/poll.html?p=565587f4e4b0b3955a59fb67 - 
140 votes, 75% are against SDL. That should count for something, 
Sonke?


Re: "Code Sandwiches"

2011-03-10 Thread retard
Thu, 10 Mar 2011 01:18:53 -0500, Nick Sabalausky wrote:

> "Jonathan M Davis"  wrote in message
> news:mailman.2409.1299728378.4748.digitalmar...@puremagic.com...
>> On Wednesday 09 March 2011 13:30:27 Nick Sabalausky wrote:
>>> But why is it that academic authors have a chronic inability to
>>> release any
>>> form of text without first cramming it into a goddamn PDF of all
>>> things? This is one example of why I despise Adobe's predominance: PDF
>>> is fucking useless for anything but printing, and no one seems to know
>>> it. Isn't it about time the ivory tower learned about Mosaic? The web
>>> is more than a PDF-distribution tool...Really! It is! Welcome to the
>>> mid-90's. Sheesh.
>>
>> And what format would you _want_ it in? PDF is _way_ better than having
>> a file
>> for any particular word processor. What else would you pick? HTML?
>> Yuck. How
>> would _that_ be any better than a PDF? These are _papers_ after all,
>> not some
>> web article. They're either written up in a word processor or with
>> latex. Distributing them as PDFs makes perfect sense.
> 
> They're text. With minor formatting. That alone makes html better. Html
> is lousy for a lot of things, but formatted text is the one thing it's
> always been perfectly good at. And frankly I think I'd *rather* go with
> pretty much any word processing format if the only other option was pdf.
> 
> Of course, show me a pdf viewer that's actually worth a damn for viewing
> documents on a PC instead of just printing, and maybe I could be
> persuaded to not mind so much. So far I've used (as far as I can think
> of, I know there's been others), Acrobat Reader (which I don't even
> allow on my computer anymore), the one built into OSX, and FoxIt.
> 
> 
>> And yes, most of these papers are published in print format as their
>> main form
>> of release. You're usually lucky to be able to get a PDF format instead
>> of having to have bought the appropriate magazine or book of papers
>> from a particular conference.
>>
>>
> I'm all too well aware how much academics considers us unwashed masses
> lucky to ever be granted the privilege to so much as glance upon any of
> their pristine excellence.

You clearly have no idea what you're talking about. If you want to 
publish your results at a peer-reviewed conference, which is often a 
requirement for further funding if you happen to depend on it, then you 
MUST adhere to their guidelines. This paper is a TR (technical report); it 
probably doesn't even go through a peer review process.

Usually you're asked to publish your conference paper using a standard 
document template, but you're not allowed to republish the paper 
elsewhere. Sometimes you're allowed to republish it on your personal site 
if you make some changes. Ever read proceedings where every paper uses 
different formatting? It doesn't look professional.

Another point against HTML or DOC or similar formats is that you can't 
really tell what the document will look like when printed. I once tried to 
lay out a paper by hand, and it just didn't work. One of the biggest 
problems is that there's a maximum number of pages allowed: extra pages 
cost money because of printing costs. With HTML or DOC you can't be sure 
how the system that prints the paper handles figure placement and line 
wrapping; usually it fails and you get bad quality. Regarding HTML, see 
also http://en.wikipedia.org/wiki/Justification_(typesetting)


Re: Function literals and lambda functions

2011-03-06 Thread retard
Sun, 06 Mar 2011 20:24:12 +, Peter Alexander wrote:

> On 6/03/11 2:03 PM, Russel Winder wrote:
>> PS  If you ask why not:
>>
>>  reduce ! ( "a+b" ) ( 0.0 , outputData )
>>
>> I find this somehow unacceptable.  It's the string, its not a function.
>> Fine, my problem, but that still leaves the above.
> 
> You probably know this already, but just in case...
> 
> The string is converted into a function at compile time, so if you were
> scared of the possible performance hit of having to parse the string at
> runtime, then you can rest assured that it is as fast as supplying a
> normal function.
> 
> On the other hand, if you just don't like the appearance of a string as
> a function in source code then, yah, I agree. It does seem a little
> wrong, although you get used to it.

It also generates a bit of redundant code, since each distinct string 
produces its own template instantiation. No solution for this has been 
proposed afaik. It's a deal breaker in embedded programming.
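
A rough, untested sketch of what I mean (exact codegen varies by compiler 
version): every distinct string literal is a separate template argument, 
so the compiler emits a separate instantiation for each spelling, even 
when the generated code is identical.

import std.algorithm : reduce;

// Two spellings of the same lambda -> two instantiations of reduce.
double sum1(double[] xs) { return reduce!"a+b"(0.0, xs); }
double sum2(double[] xs) { return reduce!"a + b"(0.0, xs); }

// A named function (or one string reused everywhere) keeps it to one.
double add(double a, double b) { return a + b; }
double sum3(double[] xs) { return reduce!add(0.0, xs); }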


Re: std.path.getName(): Screwy by design?

2011-03-01 Thread retard
Tue, 01 Mar 2011 11:04:53 -0800, Jonathan M Davis wrote:

> On Tuesday, March 01, 2011 06:54:27 Andrei Alexandrescu wrote:
>> On 3/1/11 4:54 AM, Jonathan M Davis wrote:
>> > On Tuesday 01 March 2011 02:49:31 Daniel Gibson wrote:
>> >> Am 01.03.2011 09:58, schrieb Nick Sabalausky:
>> >>> According to the docs, std.path.getName() "Returns the
>> >>> extensionless version of a filename or path."
>> >>> 
>> >>> But the doc also says that if the filename doesn't have a dot, then
>> >>> it returns null (and I've verified that on DMD 2.050). Isn't that a
>> >>> bit ridiculous? Shouldn't it still return the extensionless version
>> >>> even if it doesn't have an extension? Ie, return the original
>> >>> string.
>> >>> 
>> >>> I would expect all of the following to pass, but currently (by
>> >>> design) only the first two pass:
>> >>> 
>> >>> assert(getName(r"file.ext") == r"file");
>> >>> assert(getName(r"/path/file.ext") == r"/path/file");
>> >>> 
>> >>> assert(getName(r"file") == r"file");
>> >>> assert(getName(r"/path/file") == r"/path/file");
>> >>> 
>> >>> The current behavior seems useless.
>> >>> 
>> >>> Additionally, this also seems screwy:
>> >>> 
>> >>> // Currently passes:
>> >>> assert(getName(r"/pa.th/file") == r"/pa");
>> >>> 
>> >>> WTF? The docs seem to suggest that's by design, but I can't imagine
>> >>> why. Even on Windows it's not as if filenames can contain forward
>> >>> slashes (and except for the command-line, accessing paths with
>> >>> forward-slash separators works fine on Windows).
>> >>> 
>> >>> Fortunately, the docs do seem to be wrong about this:
>> >>> 
>> >>> version(Windows)
>> >>> 
>> >>>getName(r"d:\path.two\bar") =>   null
>> >>> 
>> >>> That currently returns r"d:\path.two\bar" as I would expect.
>> >>> 
>> >>> If those in charge agree with me on all of the this, I'd be glad to
>> >>> go through std.path, fix all of that, check for any other issues
>> >>> and submit a modified std.path with updated examples and unittests
>> >>> for approval.
>> >> 
>> >> And what about "foo.tar.gz"? Does it return "foo" or "foo.tar"? And
>> >> what should be returned?
>> > 
>> > I'd definitely argue that everything to the right of the first dot in
>> > the file name is the extension, but I don't know how that's generally
>> > handled by programs or OSes that actually care about extensions.
>> > 
>> > - Jonathan M Davis
>> 
>> If we want to stick with the notion of the extension, it should be the
>> thing after the last dot (if the dot isn't the first character of the
>> name). Thus .bashrc has no extension and foo.tar.gz has extension gz.
>> That facilitates asking questions such as "was this file
>> gz-compressed?"
> 
> Yeah, you're probably right. I definitely think of file.tar.gz as having
> the extension tar.gz, not gz, but it makes far more sense from a
> processing point of view to treat gz as the extension. You can then get
> the extension of the remainder if you want, whereas if you treated
> tar.gz as the extension, that wouldn't work all that well (particularly
> since std.path treats the dot as part of the extension instead of as a
> separator).

In *nix land the most common extensions of this sort 
are .tar.gz, .tar.bz2, .tgz, .ps.gz, .pds.gz, and .svgz. The files are 
just gzip- or bzip2-compressed single files, nothing more. Some tools only 
manage to open them if the extension is correct and otherwise treat them 
as opaque archives. For example, GNOME's PDF viewer refuses to open 
document.gz, but renaming it to document.ps.gz makes it viewable, 
assuming the file is a gzipped PostScript document.
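
For reference, the "last dot" rule Andrei describes maps to code roughly 
like this. It uses the revised std.path names (extension and 
stripExtension) rather than the old getName/getExt, so treat it as an 
untested sketch:

import std.path : extension, stripExtension;
import std.stdio : writeln;

void main()
{
    writeln(extension("foo.tar.gz"));       // ".gz"
    writeln(stripExtension("foo.tar.gz"));  // "foo.tar"
    writeln(extension(".bashrc"));          // prints nothing: no extension
}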


Re: std.path.getName(): Screwy by design?

2011-03-01 Thread retard
Tue, 01 Mar 2011 19:25:57 +, retard wrote:

> .pds.gz,

Sorry about the typo, .pdf.gz 


Re: What are tuples exactly? (D's tuples considered harmful)

2011-02-25 Thread retard
Fri, 25 Feb 2011 12:18:11 -0800, Jonathan M Davis wrote:

> On Friday, February 25, 2011 03:39:31 Morlan wrote:
>> While trying to understand the expand mechanism presented in the TDPL
>> book I tried to read std.typetuple and std.typecons files. I found a
>> true nightmare in those files in the form of an almost infinite chain
>> of aliases and macro processing. How can one understand a written text
>> if the words are redifined in every other line? And people say that
>> goto instruction is bad? Please give me a break.
>> 
>> Anyway, at some point I realized that I cannot understand what is going
>> on because there is some language mechanism in action which I do not
>> know. I wrote a small program to confirm this. Here it is:
>> 
>> struct S { TypeTuple!(int, double) field; } void main(){
>>  S mys;
>>  mys.field[0] = 4;
>>  mys.field[1] = 4.4;
>> }
>> 
>> It compiles all right. But if you replace the S's definition with {int,
>> double field;}
>> it does not compile. So tuples are clearly much more than a sequence of
>> types and they trigger a completely different semantic action than a
>> plain sequence of types. Is there a precise definition of tuples
>> somewhere?
> 
> If all you want is a normal tuple, use Tuple from std.typecons. It even
> has the convenience function tuple for creating them. TypeTuple is a
> different beast entirely, and you probably don't want that unless you
> specifically need it. It has to do with processing types and are _not_
> generally what you want. If what you're looking for is tuples, then use
> std.typecons.Tuple.

I've used:

template Tuple(T...) { alias T Tuple; }

Nicely built-in and works in most cases.
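
For completeness, here's roughly how that one-liner reproduces the 
behavior from the quoted example (untested sketch):

template Tuple(T...) { alias T Tuple; }

struct S { Tuple!(int, double) field; }

void main()
{
    S mys;
    mys.field[0] = 4;    // the int member
    mys.field[1] = 4.4;  // the double member
}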


Re: O(N) Garbage collection?

2011-02-19 Thread retard
Sat, 19 Feb 2011 22:15:52 -0500, Nick Sabalausky wrote:

> "retard"  wrote in message
> news:ijp7pa$1d34$1...@digitalmars.com...
>> Sat, 19 Feb 2011 14:32:27 -0500, dsimcha wrote:
>>
>>> On 2/19/2011 12:50 PM, Ulrik Mikaelsson wrote:
>>>> Just a thought; I guess the references to the non-GC-scanned strings
>>>> are held in GC-scanned memory, right? Are the number of such
>>>> references also increased linearly?
>>>
>>> Well, first of all, the benchmark I posted seems to indicate
>>> otherwise.
>>>   Second of all, I was running this program before on yeast DNA and it
>>> was ridiculously fast.  Then I tried to do the same thing on human DNA
>>> and it became slow as molasses.  Roughly speaking, w/o getting into
>>> the biology much, I've got one string for each gene.  Yeast have about
>>> 1/3 as many genes as humans, but the genes are on average about 100
>>> times smaller.  Therefore, the difference should be at most a small
>>> constant factor and in actuality it's a huge constant factor.
>>>
>>> Note:  I know I could make the program in question a lot more space
>>> efficient, and that's what I ended up doing.  It works now.  It's just
>>> that it was originally written for yeast, where space efficiency is
>>> obviously not a concern, and I would have liked to just try a one-off
>>> calculation on the human genome without having to rewrite portions of
>>> it.
>>
>> Probably one reason for this behavior is the lack of testing. My
>> desktop only has 24 GB of DDR3. I have another machine with 16 GB of
>> DDR2, but don't know how to combine the address spaces via clustering.
>> This would also horribly drag down GC performance. Even JVM is badly
>> tuned for larger systems, they might use the Azul Java runtimes
>> instead..
> 
> *Only* 24GB of DDR3, huh? :)
> 
> Makes me feel like a pauper: I recently upgraded from 1GB to 2GB of DDR1
> ;) (It actually had been 2GB a few years ago, but I cannablized half of
> it to build my Linux box.)
> 
> Out of curiosity, what are you running on that? (Multiple instances of
> Crysis? High-definition voxels?)

The largest processes are virtual machines, application servers, a web 
server, the IDE environment, and several compiler instances running in 
parallel. The web browser also seems to have a use for a gigabyte or two 
these days. As I recall, the memory was cheaper when I bought it than it 
is now. It's also cheaper than DDR2 or DDR (per gigabyte).


Re: O(N) Garbage collection?

2011-02-19 Thread retard
Sat, 19 Feb 2011 14:32:27 -0500, dsimcha wrote:

> On 2/19/2011 12:50 PM, Ulrik Mikaelsson wrote:
>> Just a thought; I guess the references to the non-GC-scanned strings
>> are held in GC-scanned memory, right? Are the number of such references
>> also increased linearly?
> 
> Well, first of all, the benchmark I posted seems to indicate otherwise.
>   Second of all, I was running this program before on yeast DNA and it
> was ridiculously fast.  Then I tried to do the same thing on human DNA
> and it became slow as molasses.  Roughly speaking, w/o getting into the
> biology much, I've got one string for each gene.  Yeast have about 1/3
> as many genes as humans, but the genes are on average about 100 times
> smaller.  Therefore, the difference should be at most a small constant
> factor and in actuality it's a huge constant factor.
> 
> Note:  I know I could make the program in question a lot more space
> efficient, and that's what I ended up doing.  It works now.  It's just
> that it was originally written for yeast, where space efficiency is
> obviously not a concern, and I would have liked to just try a one-off
> calculation on the human genome without having to rewrite portions of
> it.

Probably one reason for this behavior is the lack of testing. My desktop 
only has 24 GB of DDR3. I have another machine with 16 GB of DDR2, but I 
don't know how to combine the address spaces via clustering, and that 
would also horribly drag down GC performance. Even the JVM is badly tuned 
for larger systems; people might use the Azul Java runtimes instead.


Re: D vs Go on reddit

2011-02-16 Thread retard
Thu, 10 Feb 2011 22:38:03 +0100, Ulrik Mikaelsson wrote:

> 2011/2/10 Bruno Medeiros :
>> I'm very much a fan of simple and orthogonal languages. But this
>> statement has a big problem: it's not clear what one actually considers
>> to be "simple" and "orthogonal". What people consider to be orthogonal
>> can vary not only a little, but actually a lot. Sometimes it can
>> actually vary so much as to be on opposite sides. I remember seeing
>> that first hand here on D: two people were arguing for opposing things
>> in D (I don't remember the particular issue, but one was probably a
>> greater language change, the other as for the status quo, or a minor
>> change from the status quo), and explicitly argued that their
>> alternative was more orthogonal! I remember thinking that one was
>> stretching the notion of orthogonality a bit further than the other,
>> but I didn't find any of them to actually be incorrect.
> 
> For the sake of discussion I'll define orthogonal as "non-redundant".
> For instance, orthogonal dimensions in a coordinate-system is when the
> dimension is completely unrelated to other dimensions, i.e. there is no
> redundancy in the coordinate-system. Likewise orthogonality in a
> language in my definition means it does not have redundancy in features.
> 
> Now, the problem with orthogonality is that, it is not good for
> exploiting 80/20 optimisations.
> 
> Example: for most (imperative) languages, you'll somewhere have the
> general way of iteration;
> 
> list x;
> int i = 0;
> while (i < x.length) {
>   // do something with x[i];
>   i++;
> }
> 
> Now, if the language is truly orthogonal, you cannot add the "foreach (x
> in list)"-feature, since it's a redundant way of doing a subset of the
> same things. Yet, it's highly likely practical thinking says that for
> most programs in the language, 95% of all iteration will be
> list-iteration, where the foreach-version is both shorter to write,
> easier to read, and not as prone to a typo.

There is no need to guess what orthogonality means in computer science; 
the definition is here: 
http://en.wikipedia.org/wiki/Orthogonality#Computer_science

For example, if the language has both the if() statement and the ? : 
expression, you could rename ? : to if() else and overload those keywords 
to mean both an expression and a non-returning statement. Now the language 
only has one construct, which is more powerful than either of the previous 
two.
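
To make the overlap concrete, here is the same choice written both ways 
in D (just a sketch to illustrate the redundancy; D itself keeps both 
constructs):

int pick(bool cond, int a, int b)
{
    // statement form
    int x;
    if (cond) x = a; else x = b;

    // expression form: ?: duplicates the same idea
    int y = cond ? a : b;

    assert(x == y);
    return x;
}

If if/else could also be used as an expression, the second form would be 
unnecessary.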


Re: What Makes A Programming Language Good

2011-02-16 Thread retard
Wed, 16 Feb 2011 17:23:04 +, Bruno Medeiros wrote:

> On 04/02/2011 20:55, bearophile wrote:
>> Bruno Medeiros:
>>
>>> That language ecosystems are what matter, not just the language
>>> itself.
>>
>> This is true, but only once your language is already very good :-)
>>
>> Bye,
>> bearophile
> 
> I disagree. I think an average language with an average toolchain (I'm
> not even considering the whole ecosystem here, just the toolchain -
> compilers, debuggers, IDEs, profilers, and some other tools) will be
> better than a good language with a mediocre toolchain. By better I mean
> that people will be more willing to use it, and better programs will be
> created. Obviously it is very hard to quantify in a non-subjective way
> what exactly good/average/mediocre is in terms of a language and
> toolchain. But roughly speaking, I think the above to be true.
> 
> The only advantage that a good language with bad toolchain has over
> another ecosystem, is in terms of *potential*: it might be easier to
> improve the toolchain than to improve the language. This might be
> relevant if one is still an early-adopter or hobbyist, but if you want
> to do a real, important non-trivial project, what you care is what is
> the state of the toolchain and ecosystem *now*.

Surprisingly, this is exactly what I've been saying several times.

I'd also like to point out that part of the potential of new languages 
comes from the fact that you can design much cleaner standard and de facto 
libraries before the language takes off. Some of the issues with "old" 
languages come from the standard utilities and libraries, and it sometimes 
takes an enormous effort to replace those. So the potential doesn't come 
100% from the redesign of the language; it's also the redesign of the 
tools and the ecosystem. I'm also quite sure every new language now is 
essentially a redesign; there are simply too many existing languages to 
choose from.

Some examples of failed designs that are still in use: PHP's stdlib with 
its weird parameter conventions and intensive use of globals, the (GNU) 
C/C++ build tools, Java's wasteful (in terms of heap allocation) stdlib, 
C++'s thread/multicore-unaware runtime, C++'s metaprogramming libraries 
built on the terrible template model, and JavaScript's "bad" parts from 
the era when it was still a joke.

However, there has been a constant flux of new languages since the 1950s. 
I'm sure many new languages can beat Java and C++ in several ways, but in 
general a new language isn't some kind of silver bullet. Advances in 
language design follow the law of diminishing returns: even though we see 
complex breakthroughs in type system design, better syntax, and cleaner 
APIs, a 5-50% gain in usability/productivity/safety often isn't worth the 
switching effort. I've seen figures suggesting that moving from procedural 
programming to OOP improved productivity by only about 20-40%. Moving from 
one OOP language to another quite likely improves the numbers far less.

As an example, Java's toolchain and its set of available libraries are so 
huge that you would need millions of dollars and thousands of man-years to 
beat it in many domains. There simply isn't any valid technical reason not 
to use that tool (assuming it's the tool people typically use to get the 
work done). If you need a low-cost web site and only PHP hosting is 
available at that price, you can't do shit with D. Some hardcore fanboy 
would perhaps build a PHP backend for D, but it doesn't make any sense. 
It's 1000 lines of PHP vs 10 lines of D, but reclaiming that potential 
takes forever. It's not worth it.


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 13:00:00 -0800, Walter Bright wrote:

> In particular, instruction scheduling no longer seems to matter, except
> for the Intel Atom, which benefits very much from Pentium style
> instruction scheduling. Ironically, dmc++ is the only available current
> compiler which supports that.

I can't see how dmc++ is the only available current compiler that 
supports that. For example, this article (April 15, 2010) [1] says:

"The GCC 4.5 announcement was made at GNU.org. Changes from GCC 4.4, 
which was released almost one year ago, include the
 * use of the MPC library to evaluate complex arithmetic at compile time
 * C++0x improvements
 * automatic parallelization as part of Graphite
 * support for new ARM processors
 * Intel Atom optimizations and tuning support, and
 * AMD Orochi optimizations too"

GCC has supported i586 scheduling for as long as I can remember.

[1] http://www.phoronix.com/scan.php?page=news_item&px=ODE1Ng

>  > or whole program
> 
> I looked into that, there's not a lot of oil in that well.

How about [2]:

"LTO is quite promising.  Actually it is in line or even better with
improvement got from other compilers (pathscale is the most convenient
compiler to check lto separately: lto gave there upto 5% improvement
on SPECFP2000 and 3.5% for SPECInt2000 making compiler about 50%
slower and generated code size upto 30% bigger).  LTO in GCC actually
results in significant code reduction which is quite different from
pathscale.  That is one of rare cases on my mind when a specific
optimization works actually better in gcc than in other optimizing
compilers."

[2] http://gcc.gnu.org/ml/gcc/2009-10/msg00155.html

In my opinion, an up to 5% improvement is pretty good compared to the 
advances in a typical minor compiler version upgrade. For example [3]:

"The Fortran-written NAS Parallel Benchmarks from NASA with the LU.A test 
is running significantly faster with GCC 4.5. This new compiler is 
causing NAS LU.A to run 15% better than the other tested GCC releases."

[3] http://www.phoronix.com/scan.php?page=article&item=gcc_45_benchmarks&num=6

>  > and instruction level optimizations the very latest GCC and LLVM are
>  > now
> slowly adopting.
> 
> Huh? Every compiler in existence has done, and always has done,
> instruction level optimizations.

I don't know this area well enough, but here is a list of the 
optimizations LLVM does: http://llvm.org/docs/Passes.html - from what I've 
read, GNU GCC doesn't implement all of these.

> Note: a lot of modern compilers expend tremendous effort optimizing
> access to global variables (often screwing up multithreaded code in the
> process). I've always viewed this as a crock, since modern programming
> style eschews globals as much as possible.

I only know that modern C/C++ compilers are doing more and more things 
automatically. And that might soon include automatic vectorization + 
multithreading of some computationally intensive code via OpenMP.


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 11:38:50 -0800, Walter Bright wrote:

> Lutger Blijdestijn wrote:
>> retard wrote:
>> 
>>> Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
>>>
>>>>> Unfortunately DMC is always out of the question because the
>>>>> performance is 10-20 (years) behind competition, fast compilation
>>>>> won't help it.
>>>> Can you please give a few links on this?
>>> What kind of proof you need then? Just take some existing piece of
>>> code with high performance requirements and compile it with dmc. You
>>> lose.
>>>
>>> http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
>>> http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
>>> http://lists.boost.org/boost-testing/2005/06/1520.php
>>> http://www.digitalmars.com/d/archives/c++/chat/66.html
>>> http://www.drdobbs.com/cpp/184405450
>>>
>>>
>> That is ridiculous, have you even bothered to read your own links? In
>> some of them dmc wins, others the differences are minimal and for all
>> of them dmc is king in compilation times.
> 
> 
> People tend to see what they want to see. There was a computer magazine
> roundup in the late 1980's where they benchmarked a dozen or so
> compilers. The text enthusiastically declared Borland to be the fastest
> compiler, while their own benchmark tables clearly showed Zortech as
> winning across the board.
> 
> The ironic thing about retard not recommending dmc for fast code is dmc
> is built using dmc, and dmc is *far* faster at compiling than any of the
> others.

Your obsession with fast compile times is incomprehensible. It doesn't 
have any relevance in the projects I'm talking about. 'make -jN' on 
multicore machines, distcc and low-cost clusters, and incremental 
compilation already mitigate most of the issues. LLVM is also supposed to 
compile large projects faster than the 'legacy' GCC, and there are faster 
linkers than GNU ld. If you're really obsessed with compile times, there 
are far better languages, such as D.

Extensive optimization and fast compile times are inversely correlated. 
Of course your compiler compiles faster if it optimizes less. What's the 
point here?

All your examples and stories are from the 1980s and 1990s. Any idea how 
well dmc fares against the latest Intel / Microsoft / GNU compilers?


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 20:10:47 +0100, Lutger Blijdestijn wrote:

> retard wrote:
> 
>> Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
>> 
>>>> Unfortunately DMC is always out of the question because the
>>>> performance is 10-20 (years) behind competition, fast compilation
>>>> won't help it.
>>> 
>>> Can you please give a few links on this?
>> 
>> What kind of proof you need then? Just take some existing piece of code
>> with high performance requirements and compile it with dmc. You lose.
>> 
>> http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
>> http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
>> http://lists.boost.org/boost-testing/2005/06/1520.php
>> http://www.digitalmars.com/d/archives/c++/chat/66.html
>> http://www.drdobbs.com/cpp/184405450
>> 
>> 
> That is ridiculous, have you even bothered to read your own links? In
> some of them dmc wins, others the differences are minimal and for all of
> them dmc is king in compilation times.

DMC doesn't clearly win any of the tests, and these are merely some naive 
examples I found with 5 minutes of googling. Seriously, take a closer 
look - the GCC version is over 5 years old. Nobody even bothers doing DMC 
benchmarks anymore; DMC is that far out of the league. I repeat, this was 
about the performance of the generated binaries, not compile times.

Like I said: take some existing piece of code with high performance 
requirements and compile it with DMC. You lose. I honestly don't get what 
I need to prove here. Since you have no clue, presumably you aren't even 
using DMC and won't be considering it.

Just take a look at the command line parameters:
-[0|2|3|4|5|6]  8088/286/386/486/Pentium/P6 code

There are no arch-specific optimizations for PIII, Pentium 4, Pentium D, 
Core, Core 2, Core i7, Core i7 2600K, and similar kinds of products from 
AMD. No mention of auto-vectorization or the whole-program and 
instruction-level optimizations that the very latest GCC and LLVM are now 
slowly adopting.


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 10:01:53 -0800, Walter Bright wrote:

> retard wrote:
>> Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
>> 
>>>> Unfortunately DMC is always out of the question because the
>>>> performance is 10-20 (years) behind competition, fast compilation
>>>> won't help it.
>>> Can you please give a few links on this?
>> 
>> What kind of proof you need then? Just take some existing piece of code
>> with high performance requirements and compile it with dmc. You lose.
>> 
>> http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
>> http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
> 
> That link shows dmc winning.

No, it doesn't. In the Fib-5 test, where the optimizations bring the 
largest improvements in wall-clock time, g++ 3.3.1, vc++ 7, bc++ 5.5.1, 
and icc are all faster with optimized settings. This test is a joke 
anyway. I wouldn't pick a compiler for video transcoding based on some 
Fib-1 results, seriously.


Re: std.xml should just go

2011-02-14 Thread retard
Mon, 14 Feb 2011 18:48:53 +0100, spir wrote:

> On 02/14/2011 04:11 PM, Steven Schveighoffer wrote:
>> On Fri, 11 Feb 2011 19:06:48 -0500, Andrei Alexandrescu
>>  wrote:
>>
>>> On 2/11/11 8:31 AM, Bruno Medeiros wrote:
 On 04/02/2011 16:14, Eric Poggel wrote:
> On 2/3/2011 10:20 PM, Andrei Alexandrescu wrote:
>> At this point there is no turning back from ranges, unless we come
>> about with an even better idea (I discussed one with Walter but
>> we're not pursuing it yet).
>
> Care to elaborate on the new idea? Or at least a quick summary so
> we're not all left wondering?

 That comment left me curious as well...
>>>
>>> The discussed idea went as follows.
>>>
>>> Currently we have r.front and r.back for accessing the first and last
>>> element, and r[n] for an arbitrary element.
>>>
>>> Plus, r[n] is extremely flexible (opIndex, opIndexAssign,
>>> opIndexOpAssign... awesome level of control... just perfect). So then
>>> I thought, how about unifying everything?
>>>
>>> Imagine we gave up on r.front and r.back. Poof. They disappeared. Now
>>> we define two entities "first" and last" such that r[first] and
>>> r[last] refer the first and last elements in the range. Now we have
>>> the situ:
>>>
>>> - Input and forward ranges statically allow only r[first]
>>>
>>> - Bidirectional ranges allow r[first] and r[last]
>>>
>>> - Random-access ranges allow r[first], r[last], and r[n] for integrals
>>> n
>>>
>>> Now we have a unified way of referring to elements in ranges. Walter's
>>> excellent follow-up is that the compiler could use lowering such that
>>> you don't even need to use first and last. You'd just use r[0] and r[$
>>> - 1] and the compiler would take care of handling these special cases.
>>
>> er... I don't like this. 0 does not necessarily mean first element. A
>> map has arbitrary keys.
>>
>> That is, even though it potentially could be unambiguous (the compiler
>> could ensure that it is indeed a range type before allowing the
>> conversion of 0 to first), there would be confusion where [0] *didn't*
>> mean first element.
>>
>>
>>> Advantages: unified syntax, increased flexibility with opIndexAssign
>>> and opIndexOpAssign. Disadvantages: breaks all range-oriented code out
>>> there.
>>
>> opIndexOpAssign is really the only improvement. I think code-wise,
>> things would get uglier. For example, in a bidirectional range, front()
>> and back() do not simply translate to some index that can be applied to
>> another function. However, in random-access ranges, front() can simply
>> be defined as opIndex. So for, random-access ranges, code gets shorter,
>> but not necessarily simpler (slightly less boilerplate) but
>> bidirectional ranges get much uglier (static ifs and the like).
>>
>> But I agree the opIndexOpAssign is a missing piece for front and back.
>>
>> But let's think about something else -- you want first and last, but
>> front and back also work. What if we continued to use front and back,
>> kept the current functions, but treated r[front] and r[back] as you
>> say? Then, you'd have some rule for r[front] like:
>>
>> 1. if front() is defined, translates to r.front() 2. if opIndex is
>> defined, translates to r.opIndex[0], although I would prefer some
>> symbol other than 0, because I'd like to use it on a custom map type to
>> mean "front element".
>>
>> And then all existing ranges still work, with the new syntax, and you
>> can keep the clear separation of functions for when it makes sense.
> 
> I'd also like r.rest or r[rest] (meaning lisp's cdr = all but first).
> Rather frequent needs in functions à la reduce (take first element as
> seed, then iterate/recurse on rest), and recursive functional algorithms
> in general. What do you think?
> 
> functional 'in':
> 
>  bool contains (E) (E[] elements, E element) {
>      if (elements.length == 0)
>          return false;
>      if (elements[0] == element)
>          return true;
>      return contains(elements.rest(), element);
>  }
> 
> [By the way, I'm looking for a clear explanation of how tail recursion
> elimination is typically implemented: pointer welcome off list if you
> have that.]

http://en.wikipedia.org/wiki/Tail_recursion#Implementation_methods
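
The short version of what those implementations do: a call in tail 
position is turned into a jump back to the top of the function, reusing 
the current stack frame. For the contains() above that is effectively the 
following rewrite (untested sketch; elements.rest() from the quoted code 
is written as a slice here):

bool contains(E)(E[] elements, E element)
{
    while (elements.length != 0)
    {
        if (elements[0] == element)
            return true;
        elements = elements[1 .. $]; // "rest": drop the first element
    }
    return false;
}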


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 04:44:43 +0200, so wrote:

>> Unfortunately DMC is always out of the question because the performance
>> is 10-20 (years) behind competition, fast compilation won't help it.
> 
> Can you please give a few links on this?

What kind of proof do you need, then? Just take some existing piece of 
code with high performance requirements and compile it with dmc. You lose.

http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
http://lists.boost.org/boost-testing/2005/06/1520.php
http://www.digitalmars.com/d/archives/c++/chat/66.html
http://www.drdobbs.com/cpp/184405450

Many of those are already old. GCC 4.6, LLVM 2.9, and ICC 12 are much 
faster, especially on multicore hardware. A quick look at the DMC 
changelog doesn't reveal any significant new optimizations during the past 
10 years, except some Pentium 4 opcodes and fixes at the library level.

I rarely see a benchmark where DMC produces the fastest code. In addition, 
most open source projects are not compatible with DMC's toolchain out of 
the box. If the execution performance of the generated code is your top 
priority, I wouldn't recommend using Digital Mars products.


Re: Qt C++ GUI library is now set to die, as a result of the MS takeover

2011-02-13 Thread retard
Mon, 14 Feb 2011 13:03:42 +1300, Nick_B wrote:

> I'm also scratching my head on this, in terms of what Nokia gets out of
> this. They are essentially trading a larger, more successful, more
> established, platform and ecosystem (Symbian) and large developer mind
> share, for a much smaller, much less successful, much less developer
> mind share platform and ecosystem (Win Phone 7 and Silverlight).

Money? http://www.istockanalyst.com/article/viewiStockNews/articleid/4886090


Re: tooling quality and some random rant

2011-02-13 Thread retard
Sun, 13 Feb 2011 15:06:46 -0800, Brad Roberts wrote:

> On 2/13/2011 3:01 PM, Walter Bright wrote:
>> Michel Fortin wrote:
>>> But note I was replying to your reply to Denis who asked specifically
>>> for demangled names for missing symbols. This by itself would be a
>>> useful improvement.
>> 
>> I agree with that, but there's a caveat. I did such a thing years ago
>> for C++ and Optlink. Nobody cared, including the people who asked for
>> that feature. It's a bit demotivating to bother doing that again.
> 
> No offense, but this argument gets kinda old and it's incredibly weak.
> 
> Today's tooling expectations are higher.  The audience isn't the same. 
> And clearly people are asking for it.  Even the past version of it I
> highly doubt no one cared, you just didn't hear from those that liked
> it.  After all, few people go out of their way to talk about what they
> like, just what they don't.

Half of the readers have already added me to their killfile, but here 
goes some on-topic humor:

http://www.winandmac.com/wp-content/uploads/2010/03/ipad-hp-fail.jpg

Sometimes people don't yet know what they want.

For example, the reason we write portable C++ in some projects is that it 
makes it easy to switch between VC++, ICC, GCC, and LLVM - whichever 
produces the best performing code. Unfortunately DMC is always out of the 
question because its performance is 10-20 years behind the competition; 
fast compilation won't help it.


Re: tooling quality and some random rant

2011-02-13 Thread retard
Sun, 13 Feb 2011 19:10:01 +0100, Andrej Mitrovic wrote:

> On 2/13/11, Alan Smithee  wrote:
>> Andrej Mitrovic Wrote:
>>
>>> Could you elaborate on that? Aren't .di files supposed to be auto-
>> generated by the compiler, and not hand-written?
>>
>> Yea, aren't they? How come no one uses that feature? Perhaps it's
>> intrinsically broken? *hint hint*
>>
>>
>> This NG assumes a curious stance. Sprouting claims and standing by them
>> until they're shown invalid, and then some. This is not the way to go
>> for a new language. It's YOUR job (not yours in particular, Andrej) to
>> demonstrate the feasibility of a certain feature, ONLY THEN can you
>> claim how it may solve any issues. And it needs to be more than a
>> 10-line Hello World. Because you can concatenate Hello World 1,000,000
>> times, D can work for multi million line projects, right?
>>
>> "But it takes time!" ... uh, yea, how's for 11 years? Or at least 4
>> which D has been past the 1.0 version. How many people gave up on their
>> med/large projects and moved to "lesser" languages in this span?
>>
>>
> On 2/13/11, em...@example.com  wrote:
>> Andrej Mitrovic Wrote:
>>
>>> Could you elaborate on that? Aren't .di files supposed to be auto-
>> generated by the compiler, and not hand-written?
>>
>> Yea, aren't they? How come no one uses that feature? Perhaps it's
>> intrinsically broken? *hint hint*
>>
>>
>> This NG assumes a curious stance. Sprouting claims and standing by them
>> until they're shown invalid, and then some. This is not the way to go
>> for a new language. It's YOUR job (not yours in particular, Andrej) to
>> demonstrate the feasibility of a certain feature, ONLY THEN can you
>> claim how it may solve any issues. And it needs to be more than a
>> 10-line Hello World. Because you can concatenate Hello World 1,000,000
>> times, D can work for multi million line projects, right?
>>
>> "But it takes time!" ... uh, yea, how's for 11 years? Or at least 4
>> which D has been past the 1.0 version. How many people gave up on their
>> med/large projects and moved to "lesser" languages in this span?
>>
>>
> Heh. :)
> 
> I'm not claiming that I know that everything works, I only know as much
> as I've tried. When I've hit a bug in a multi-thousand line project I'll
> report it to bugzilla.
> 
> So what's broken about generating import modules, is it already in
> bugzilla? I've only heard about problems with templates so far, so I
> don't know. If they're really broken we can push  Walter & Co. to fix
> them.
> 
> I know of a technique, too. I've heard posting a random comment on a D
> reddit thread about a D bug usually gets Andrei to talk with Walter in
> private ASAP and fix it right away.

I wish there were more news about D. This would bring us more reddit 
threads and thus more bug fixes.


Re: Stupid little iota of an idea

2011-02-13 Thread retard
Sun, 13 Feb 2011 08:32:31 +0200, so wrote:

>> 1. and .1 are very minor improvements mainly for the laziest developers
>> out there. It's getting harder and harder to get rid of them. Avoiding
>> these kind of conflicts between core language features should be
>> priority #1.
> 
> For lazy developers? i don't think so, how lazy one can get anyways,
> after all we are not typists.
> We most of the time think (i can't be the judge here actually), rarely
> type.
> 
> I would love to see the reasoning on this one, and how successfully made
> it into most if not all languages.
> Sometimes i think designers make this kind of decisions for their
> depressive times. In those times they remember this and laugh how they
> fooled the whole world.

Might be :-D


Re: Stupid little iota of an idea

2011-02-12 Thread retard
Sat, 12 Feb 2011 19:42:59 +0200, Max Samukha wrote:

> On 02/12/2011 07:12 PM, retard wrote:
>>
>> You're just arguing against his principles:
>>
>> "..besides arguments ad populum are fallacious"
>>
>> http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=129453
>>
>>
> Yes, I use ad populum all the time for its effectiveness.
> 
> I'll try to wriggle out by saying it was not an argument for "iota" but
> rather counterargument to the ad populum argument that "iota" is bad
> since it exists only in the long-forgotten APL and an unknown C++
> extension.

I can't deny facts. Iota is indeed quite widespread; I've seen it in 
several languages. However, programming languages are like DNA: even bad 
syntax sometimes gets in and then becomes hard to remove.

Just a day or two ago bearophile showed how the octal literal syntax is 
harmful, yet it spread from C to C++, Java, and even Scala. The same can 
be said about the floating point literal syntax: both 1. and .1 are very 
minor improvements mainly for the laziest developers out there, and it's 
getting harder and harder to get rid of them. Avoiding these kinds of 
conflicts between core language features should be priority #1.
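
To spell out the octal pitfall (a sketch; I haven't checked exactly which 
D release dropped the C-style literal or added the library replacement):

import std.conv : octal;

void main()
{
    // In C, C++, Java (and early D) a leading zero silently switches
    // the base: `int mode = 010;` means eight, not ten.
    int mode = octal!"755";   // explicit octal instead, 493 in decimal
    assert(mode == 493);
}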


Re: Stupid little iota of an idea

2011-02-12 Thread retard
Sat, 12 Feb 2011 17:54:24 +0200, Max Samukha wrote:

> On 02/12/2011 04:52 PM, Jonathan M Davis wrote:
>> On Saturday 12 February 2011 06:21:15 bearophile wrote:
>>> Jonathan M Davis:
 On Saturday 12 February 2011 03:25:29 Andrei Alexandrescu wrote:
> And that's part of what makes it best.

 Agreed.
>>>
>>> If you agree on that, then you can't be a designer for a public API.
>>
>> I'm not saying that you should typically pick function names that way.
>> But given that we already have iota, have already had iota for some
>> time, and that there is already a C++ function by the same name that
>> does the same thing, I see no reason to change it. It's nice and
>> memorable, and it doesn't create confusion based on misunderstanding
>> its name. Sure, a name that clearly says what it does would be nice,
>> but I don't really like any of the names that have been suggested, and
>> iota has worked just fine thus far.
> 
> Andrei's minion in me is feeling the urge to add that "iota" is also
> used in Go (for generating consecutive integers at compile-time,
> http://golang.org/doc/go_spec.html#Iota), and since Go is supposed to
> grow popular, "iota" will gain more popularity as well.

You're just arguing against his principles:

"..besides arguments ad populum are fallacious"

http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=129453



Re: Is D still alive?

2011-01-31 Thread retard
Mon, 31 Jan 2011 11:53:34 -0800, Jonathan M Davis wrote:

> On Monday, January 31, 2011 11:31:29 Jesse Phillips wrote:
>> Trass3r Wrote:
>> > > I've chosen to only work with D1/Tango from start, and I simply
>> > > don't recognize the frustration many are feeling. I'm only
>> > > concerned over that there ARE quite a few developers that seems to
>> > > have been turned off by instability, and the Phobos/Tango-problem.
>> > 
>> > Well, if nobody acted as a guinea pig, no issues would be uncovered
>> > ;) And though I already encountered several blocker bugs myself I got
>> > the feeling that the situation has become way better. Of course if,
>> > for some reason, you absolutely need x64 or have a hard deadline for
>> > your project then D1 is probably the better way to go.
>> 
>> Andrei put for the question once of, "How many issues would users run
>> across if they stuck to those features that are also available in
>> v1.0?"
>> 
>> I think the answer would be more then sticking with a D1 compiler, but
>> not nearly the number people do hit, which is also diminishing rapidly.
>> 
>> I do not think there is an issue with using D2 in a new project, but if
>> you have to ask you probably should go with D1. I say this because
>> someone who is aware of the issues present in the language is able to
>> decide if their desired project would be hindered by the bug. There are
>> definitely some projects, with constraints which would not make D a
>> very good choice.
>> 
>> For example D would make a great language on an embedded device, but
>> currently the first one to take it on will have a massive overhead to
>> make it work.
> 
> Personally, I find that it's issues such as not being able to link C or
> C++ code compiled by Microsoft's compiler with code compiled by dmd
> which would stop be me from being able to use D in projects at work. The
> stability of the compiler is an issue, but the linker issue totally
> kills it before the stability issue would even come up. Pretty much
> everything I work on at work has to run on both Linux and Windows (and
> soon Mac OS X), and we use Microsoft's compiler here, so D could would
> _have_ to be able to link with code compiled by Microsoft's compiler.
> The issue of D1 or D2 is completely irrelevant.

I don't do Windows development, but not being able to use popular 
third-party development tools because of object file format issues sounds 
like a huge problem. I took a quick look at the digitalmars.com site: the 
limitation is only mentioned once, in the FAQ section. A competent 
programmer might also discover it by reading the OPTLINK page.

There is also no mention of the quality of the D2 toolchain: "1.030 
stable", "1.066 latest", and a mysterious "2.051". I would assume it's 
stable. But as Ulrik said, "the kind of bug-reports I hear frustration 
around, it seems D2 should still be considered beta".


Re: Is D still alive?

2011-01-31 Thread retard
Mon, 31 Jan 2011 11:43:37 -0500, Steven Schveighoffer wrote:

> On Fri, 28 Jan 2011 20:16:54 -0500, Walter Bright
>  wrote:
> 
>> Steven Schveighoffer wrote:
>>> I can't buy "enterprise" support,
>>
>> Of course you can!
> 
> No really, I can't afford it ;)
> 
> But seriously, I find it hard to believe that you can buy enterprise
> support for D if it means that you do the work.  There's only one you. 
> So at some point, you might be spread too thin between adding new
> features, posting to this newsgroup, and supporting all enterprise
> customers.
> 
> Any estimate you can give on how many such customers you have?

The fact that the final specification and design rationale of D are 
undocumented and live in Walter's head means that no other person can sell 
that kind of deep enterprise support, because it's not clear how the 
language should work; the rest of us can only guess. It also means that 
the more time Walter spends on enterprise support, the less time he has to 
work on D. The best thing for D might be for nobody to buy any support at 
all. All the conferences and events just distract from D's development.

I think the same applies to Phobos 2: only Andrei knows the design well 
enough and knows how it's going to change in the future. No matter how 
much time one spends studying D or the ecosystem or how D is used in the 
enterprise world, one simply can't obtain any reasonable level of 
knowledge to become a "certified" authority in this community.

About the enterprise support... I haven't seen any material from Walter 
targeting professional D developers, only advertisements for people who 
have never used D. Maybe the hardcore stuff isn't publicly available.

The commercial language consultancy support I've seen is that consultants 
with 20+ years of enterprise "C++ experience" teach young developers with 
only ~1-5 years of enterprise experience with the platform. Typically 
even the fresh juniors have some experience with the platform (via 
university training) and the in-house seniors with 3+ years of experience 
help them to get more familiar with the platform used in the company. 
It's also very rare to focus only on the language; usually the frameworks 
and toolchain are the major culprits. YMMV of course, and the world is 
full of all kinds of bullshit consultancy.


Re: Smartphones and D

2011-01-31 Thread retard
Mon, 31 Jan 2011 12:04:11 +0100, dennis luehring wrote:

>> While workstations for developers have bigger and completely different
>> requirements, in general the most demanding applications for ordinary
>> sixpack-joe are hd-video transcoding (which actually isn't memory
>> intensive), image manipulation (this year's basic $100 models already
>> sport a sensor of 14 megapixels =>  45 MB per image layer), and
>> surprisingly web browsing.
>>
>> The ARM equipment support this by providing powerful co-processors and
>> having a tiny (Thumb) instruction set. It's really hard to see where
>> they would need more than 4 GB of RAM.. even according to Moore's law
>> it will take at least 6 years for the top of the line products to use
>> this much memory.
> 
> but they work on 64bit:
> http://www.computerworld.com/s/article/9197298/Arm_readies_processing_cores_for_64_bit_computing

What this means is that the same add/sub/mul/div calculator program that 
previously needed 2000 bytes of RAM on my grandfather's PDA will soon use 
500 GB.


Re: Smartphones and D

2011-01-31 Thread retard
Sun, 30 Jan 2011 19:36:44 +0100, Daniel Gibson wrote:

> Am 30.01.2011 13:29, schrieb Michel Fortin:
>> On 2011-01-30 03:05:59 -0500, Gary Whatmore  said:
>>
>>> D's main focus currently is 32-bit x86 servers and desktop
>>> applications. This is where the big market has traditionally been. Not
>>> everyone has 64-bit hardware and I have my doubts about the size of
>>> the smartphone markets.
>>
>> I think the important point here is ARM, not smartphones.
>>
>> ARM processors will soon start to enter other markets, mainly the
>> server and laptop markets,
> 
> I'm not sure about these markets, because ARM is stuck to 32bit, 64bit
> ARM seems to be (almost?) impossible as far as I know.

It will take years before the 64-bit address space starts to make sense 
in portable systems.

While developer workstations have bigger and completely different 
requirements, in general the most demanding applications for the ordinary 
six-pack Joe are HD video transcoding (which actually isn't memory 
intensive), image manipulation (this year's basic $100 camera models 
already sport 14-megapixel sensors, i.e. roughly 45 MB per image layer), 
and, surprisingly, web browsing.

ARM equipment supports this by providing powerful co-processors and a tiny 
(Thumb) instruction set. It's really hard to see where these devices would 
need more than 4 GB of RAM; even according to Moore's law it will take at 
least 6 years for the top-of-the-line products to use that much memory.


Re: Is D not-for-profit or not?!

2011-01-30 Thread retard
Sun, 30 Jan 2011 09:06:57 -0500, Heywood Floyd wrote:

> Jeff Nowakowski Wrote:
> 
>> There's nothing wrong with being in it for money, but it would be nice
>> to know up front and in what manner.
> 
> 
> I've been meaning to ask, and I'll just take this oppurtunity, and it
> relates to what Jeff just said:
> 
> If one would like to donate money to D, how would one do that? Would it
> even make any sense? Or be needed?
> 
> And this naturally raises the question: Who/what owns D? Is it a
> non-profit, a group of people, or a business? And regardless of who owns
> D, is there any D-only organisation that one could support, financially?
> I'm not demanding an answer, I'm just sharing my thoughts.
> 
> I mean, it would feel weird to donate money to Digital Mars, a
> for-profit company, that does all kinds of things, including C++, right?
> If I was to feel confident in donating it would have to be to some sort
> of formally founded non-profit legal body with some sort of constitution
> like "to further the development of D" or something. I don't know how
> these things work. I guess right now D is too small and the legal cost
> of just maintaining such an organisation would surpass any donations
> anyway.

D is basically Walter's language. He decides what goes in and how stuff 
works. People who live nearby are somewhat able to influence the process.

So far it doesn't look like any earmarked money has been used to buy 
specific features. For example, I doubt that even if you donated one 
million USD, they would rename the keywords or __traits into something 
readable, or add built-in first-class tuples. I also doubt you can make 
the dmc/dmd backend FOSS with any sum of money. If you wanted some changes 
badly, I'd recommend donating the money to some democratically governed 
community language without any BDFL.

I once saw that money has been used to support dsource / Tango 
development. Phobos OTOH is Andrei's child. I bet he earns at least 
$2 per month at Facebook, so you would need to be extremely rich to 
persuade him or to give him something useful in return, such as free time.


Re: Is D still alive?

2011-01-28 Thread retard
Fri, 28 Jan 2011 16:14:27 +, Roman Ivanov wrote:

> == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s
> article
>> On 1/27/11 8:02 PM, Walter Bright wrote:
>> > I think one of the reasons DbC has not paid off is it still requires
>> > a significant investment of effort by the programmer. It's too easy
>> > to not bother.
>> One issue with DbC is that its only significant advantage is its
>> interplay with inheritance. Otherwise, scope() in conjunction with
>> assert works with less syntactic overhead. So DbC tends to shine with
>> large and deep hierarchies... but large and deep hierarchies are not
>> that a la mode anymore.
> 
> DbC opens many interesting possibilities if it's supported by tools
> other than just the compiler. MS has included it in .NET 4.0:
> 
> http://research.microsoft.com/en-us/projects/contracts/

Mono 2.8 also seems to support those:

http://www.mono-project.com/news/archive/2010/Oct-06.html


Re: Is D still alive?

2011-01-28 Thread retard
Fri, 28 Jan 2011 10:14:04 -0500, Steven Schveighoffer wrote:

> On Thu, 27 Jan 2011 04:59:18 -0500, retard  wrote:
> 
>> Wed, 26 Jan 2011 15:35:19 -0500, Steven Schveighoffer wrote:
>>
>>> I'd suggest to anyone looking to use D for something really big to try
>>> and "prove" out how well D will perform for you by coding up bits of
>>> your whole project that you think will be needed.  Hopefully, you can
>>> do everything without hitting a mercy bug and then you can write your
>>> full project in it.
>>
>> I think this reveals a lot about D. You still need to prove things. Or
>> maybe the community members in general aren't very good developers;
>> they can't see the potential of this language. The fact is, no matter
>> what language you choose, if it isn't a complete joke, you can finish
>> the project. (I'm assuming the community members here won't be writing
>> any massive projects which are not possible to do in C++ or PHP or
>> Java.)
> 
> I fully see the potential of the language, but I've also experienced
> that a one (or two or three) man compiler team does not fix bugs on *my*
> schedule.  I can't buy "enterprise" support, so any bugs I may hit, I'm
> just going to have to wait for Walter and Co. to get around to them. 
> Not a problem for me, because I'm not developing with D professionally.

I agree.
 
> But if I was going to base a software company on D, I'd be very nervous
> at this prospect.

Exactly.

> 
> I think as D matures
> and hopefully gets more enterprise support, these problems will be
> history.

This is the classic chicken-or-egg problem. I'm not trying to be 
unnecessarily mean; enterprise support is something you desperately need. 
Consider dsource, Wiki4D, D's Bugzilla, etc. It's amazing how much 
third-party money and effort affects the development. Luckily many things 
are also free nowadays, such as GitHub.

> 
>> I don't see any need to prove how well Haskell works. Even though it's
>> a "avoid success at all costs" experimental research language. It just
>> works. I mean to the extent that I'm willing to go with these silly
>> test projects that try to prove something.
> 
> The statements I made are not a property of D, they are a property of
> the lack of backing/maturity.  I'm sure when Haskell was at the same
> maturity stage as D, and if it had no financial backing/support
> contracts, it would be just as much of a gamble.

But Haskell's developers have received funding continuously over the 
years.

> You seem to think that D is inherently flawed because of D, but it's
> simply too young for some tasks.  It's rapidly getting older, and I
> think in a year or two it will be mature enough for most projects.

I've heard this before. I've also heard that the 64-bit port and many 
other things would be done in a month or two, or a year. The fact is, 
you're overly optimistic and these claims are all bullshit. When I come 
back here in a year or two, I'll have full justification to laugh at your 
stupid claims.


Re: DSource (Was: Re: Moving to D )

2011-01-28 Thread retard
Fri, 28 Jan 2011 15:03:24 +, Bruno Medeiros wrote:

> 
> I know, I know. :)  (I am up-to-date on D.announce, just not on "D" and
> "D.bugs")
> I still wanted to make that point though. First, for retrospection, but
> also because it may still apply to a few other DSource projects (current
> or future ones).

You don't need to read every post here. Reading every bug report is just 
stupid... but it's not my problem. It just means that the rest of us have 
less competition in everyday situations (getting women, work offers, and 
so on).


Re: Hot for dmd 64bit

2011-01-27 Thread retard
Thu, 27 Jan 2011 06:32:58 +, dwilson wrote:

> Beside praying and pestering, what can we D non-experts do to help get a
> stable 64-bit dmd available?
> 
> Killer D features are strings, slick built in dynamics arrays, no
> headers files to keep in sync, and the other nice features often praised
> by others.   I'm not sure yet that D is my favorite language, but it's
> in the list of top three.
> 
> Killing D (at least for me) is the limit choices for compiling on 64-bit
> Linux with D2 and preferably Phobos instead of Tango.  My setup, for
> reasons I haven't investigated deeply, can't run 32-bit anything, and I
> do intend to work on huge arrays of data, a few GB in RAM.  As for
> Phobos, it's obviously more Mars-related than "Tango" :)

Didn't Walter say about a year ago that it would only take 1-2 months to 
finish the 64-bit port?


Re: Is D still alive?

2011-01-27 Thread retard
Wed, 26 Jan 2011 23:33:54 +0100, Trass3r wrote:

>> For me, D's killer features were string handling (slicing and
>> appending/concatenation) and *no header files*. (No more header files!!
>> Yay!!!). But auto is fantastic too though, I get sooo much use out of
>> that.
> 
> Getting rid of the pointer crap (proper arrays, bounds checking, classes
> as reference types,...) is definitely among the top 10 on my list.

Yep, those were the reasons that lured me to learn Java. However, those 
were not the reasons to learn D. The main reasons were RAII and Design by 
Contract. Even funnier, it took D about 9 years to fix the main bug in DbC 
(contract inheritance).
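
To make it concrete, this is roughly what contract inheritance is supposed
to mean (a minimal sketch with made-up classes, again using the 'body'
keyword of the era): an override may loosen the precondition (the 'in'
contracts are OR'd) but must keep the postcondition at least as strong
(the 'out' contracts are AND'd).

class Base
{
    int f(int x)
    in { assert(x > 0); }        // callers of Base must pass x > 0
    out (r) { assert(r >= 0); }  // Base promises a non-negative result
    body { return x; }
}

class Derived : Base
{
    override int f(int x)
    in { assert(x > -10); }      // loosened precondition, OR'd with Base's
    out (r) { assert(r >= 1); }  // tightened postcondition, AND'd with Base's
    body { return x < 1 ? 1 : x; }
}

void main()
{
    Base b = new Derived;
    // Accepted: Derived's in-contract passes even though Base's would not.
    assert(b.f(-5) == 1);
}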


Re: Is D still alive?

2011-01-27 Thread retard
Wed, 26 Jan 2011 14:09:25 -0800, Walter Bright wrote:

> Trass3r wrote:
>> But once you had a test drive, you just can't get out anymore.
> 
> I've had more than one longtime C++ expert tell me that after using D
> for a while, then for work reasons get forced back into C++, just find
> themselves cringing every time they edit it.

I also got brainwashed by the C++ advocates years ago. However, I didn't 
need D to see how terrible writing C++ is. C++ sure is a powerful 
language and sometimes a necessary evil, but you don't really need very 
strong doses of more recent languages to see how much nicer everything 
else is.


Re: Is D still alive?

2011-01-27 Thread retard
Wed, 26 Jan 2011 15:35:19 -0500, Steven Schveighoffer wrote:

> I'd suggest to anyone looking to use D for something really big to try
> and "prove" out how well D will perform for you by coding up bits of
> your whole project that you think will be needed.  Hopefully, you can do
> everything without hitting a mercy bug and then you can write your full
> project in it.

I think this reveals a lot about D. You still need to prove things. Or 
maybe the community members in general aren't very good developers; they 
can't see the potential of this language. The fact is, no matter what 
language you choose, if it isn't a complete joke, you can finish the 
project. (I'm assuming the community members here won't be writing any 
massive projects which are not possible to do in C++ or PHP or Java.)

I don't see any need to prove how well Haskell works. Even though it's a 
"avoid success at all costs" experimental research language. It just 
works. I mean to the extent that I'm willing to go with these silly test 
projects that try to prove something.


Re: Is D still alive?

2011-01-27 Thread retard
Wed, 26 Jan 2011 14:39:08 -0500, Steven Schveighoffer wrote:

> I will warn you, once you start using D, you will not want to use
> something else.  I cringe every day when I have to use PHP for work.

Nice trolling.


Re: DVCS

2011-01-22 Thread retard
Sat, 22 Jan 2011 14:47:48 -0800, Walter Bright wrote:

> retard wrote:
>> Does the new Ubuntu overall work better than the old one? Would be
>> amazing if the media players are still all broken.
> 
> I haven't tried the sound yet, but the video playback definitely is
> better.
> 
> Though the whole screen flashes now and then, like the video mode is
> being reset badly. This is new behavior.

Ubuntu probably uses Compiz if you have enabled desktop effects. This 
might not work with ATI's (open source) drivers. Turning Compiz off makes 
it use a "safer" 2D engine. In GNOME the setting can be changed as shown here:
http://www.howtoforge.com/enabling-compiz-fusion-on-an-ubuntu-10.10-desktop-nvidia-geforce-8200-p2

It's the "none" option in the second figure.


Re: DVCS

2011-01-22 Thread retard
Sat, 22 Jan 2011 13:12:26 -0800, Walter Bright wrote:

> Vladimir Panteleev wrote:
>> http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/
> 
> Thanks for finding that. But I think I'll stick for now with the ipod's
> calendar. It's more useful anyway, as it moves with me.

Does the new Ubuntu overall work better than the old one? Would be 
amazing if the media players are still all broken.


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread retard
Sat, 22 Jan 2011 00:58:59 -0800, Walter Bright wrote:

> Gour wrote:
>> I'm very seriously considering to put PC-BSD on my desktop and of
>> several others in order to reduce my admin-time required to maint. all
>> those machines.
> 
> OSX is the only OS (besides DOS) I've had that had painless upgrades.
> Windows upgrades never ever work in place (at least not for me). You
> have to wipe the disk, install from scratch, then reinstall all your
> apps and reconfigure them.
> 
> You're hosed if you lose an install disk or the serial # for it.
> 
> Ubuntu isn't much better, but at least you don't have to worry about
> install disks and serial numbers. I just keep a list of sudo apt-get
> commands! That works pretty good until the Ubuntu gods just decide to
> drop kick your apps (like sunbird) out of the repository.

Don't blame Ubuntu, http://en.wikipedia.org/wiki/Mozilla_Sunbird

"It was developed as a standalone version of the Lightning calendar and 
scheduling extension for Mozilla Thunderbird. Development of Sunbird was 
ended with release 1.0 beta 1 to focus on development of Mozilla 
Lightning.[6][7]"

Ubuntu doesn't drop support for widely used software. I'd use Google's 
Calendar instead.


Re: DVCS

2011-01-20 Thread retard
Thu, 20 Jan 2011 13:33:58 +0100, Gour wrote:

> On Thu, 20 Jan 2011 06:39:08 -0500
> Jeff Nowakowski  wrote:
> 
> 
>> No, I haven't tried it. I'm not going to try every OS that comes down
>> the pike.
> 
> Then please, without any offense, do not give advises about something
> which you did not try. I did use Ubuntu...
> 
>> So instead of giving you a bunch of sane defaults, you have to make a
>> bunch of choices up front.
> 
> Right. That's why there is no need for separate distro based on DE user
> wants to have, iow, by simple: pacman -Sy xfce4 you get XFCE environment
> installed...same wit GNOME & KDE.

It's the same in Ubuntu. You can install the minimal server build and 
install the DE of your choice in a similar way. The prebuilt images 
(Ubuntu, Kubuntu, Xubuntu, Lubuntu, ...) are for those who can't decide 
and don't want to fire up a terminal to write bash code. In Ubuntu you 
have even more choice: the huge metapackage or just the DE packages, with 
or without recommendations. A similar system just doesn't exist for Arch. 
For the lazy user Ubuntu is a dream come true - you never need to launch 
xterm if you don't want to. There's a GUI for almost everything.

> 
>> That's a heavy investment of time, especially for somebody unfamiliar
>> with Linux.
> 
> Again, you're speaking without personal experience...

You're apparently a Linux fan, but have you got any idea which BSD or 
Solaris distro to choose? The choice isn't as simple if you have zero 
experience with the system. 

> 
> Moreover, in TDPL's foreword, Walter speaks about himself as "..of an
> engineer..", so I'm sure he is capable to handle The Arch Way (see
> section Simplicity at https://wiki.archlinux.org/index.php/Arch_Linux)
> which says: "The Arch Way is a philosophy aimed at keeping it simple.

I think Walter's system isn't up to date because he is a lazy bitch. He has 
all the required competence but never bothers to update if it just works 
(tm). The same philosophy can be found in dmd/dmc. The code is sometimes 
hard to read, hard to maintain and buggy, but if it works, why fix it?

> The Arch Linux base system is quite simply the minimal, yet functional
> GNU/Linux environment; the Linux kernel, GNU toolchain, and a handful of
> optional, extra command line utilities like links and Vi. This clean and
> simple starting point provides the foundation for expanding the system
> into whatever the user requires." and from there install one of the
> major DEs (GNOME, KDE or XFCE) to name a few.

I'd give my vote for LFS. It's quite minimal.

> 
>> The upgrade problems are still there. *Every package* you upgrade has a
>> chance to be incompatible with the previous version. The longer you
>> wait, the more incompatibilities there will be.
> 
> There are no incompatibilities...if I upgrade kernel, it means that
> package manager will figure out what components has to be updated...
> 
> Remember: there are no packages 'tagged' for any specific release!

Even if the package manager works perfectly, the repositories have bugs 
in their dependencies and other metadata.

> 
>> Highlighting the problem of waiting too long to upgrade. You're
>> skipping an entire release. I'd like to see you take a snapshot of Arch
>> from 2008, use the system for 2 years without updating, and then
>> upgrade to the latest packages. Do you think Arch is going to magically
>> have no problems?
> 
> I did upgrade on my father-in-law's machine which was more then 1yr old
> without any problem.
> 
> You think there must be some magic to handle it...ask some FreeBSD user
> how they do it. ;)

There's usually a safe upgrade period. If you wait too long, package 
conflicts will appear. It's simply too much work to keep rules for all 
possible package transitions. For example, a libc update breaks kde, but 
the package is now called kde4. The system needs to know how to first 
remove all kde4 packages and then update them. Chromium was previously a 
game, but now it's a browser; the game became chromium-bsu or something. I 
have a hard time believing the minimal Arch does all this.


Re: Potential patent issues

2011-01-19 Thread retard
Wed, 19 Jan 2011 15:44:38 -0500, Nick Sabalausky wrote:

> "Andrej Mitrovic"  wrote in message
> news:mailman.724.1295465996.4748.digitalmar...@puremagic.com...
>> Or pack your bags and move to Europe. :p
> 
> I thought Europe was getting software patents?

It's the US intellectual property mafia pushing software patents to the EU 
via WIPO, bribes, and extortion.


Re: What Makes A Programming Language Good

2011-01-19 Thread retard
Wed, 19 Jan 2011 20:01:28 +, Adam Ruppe wrote:

>> I meant that if the latest version 0.321 of the project 'foobar'
>> depends on 'bazbaz 0.5.8.2'
> 
> Personally, I'd just prefer people to package their damned dependencies
> with their app
> 
> But, a configuration file could fix that easily enough. Set one up like
> this:
> 
> 
> bazbaz = http://bazco.com/0.5.8.2/
> 
> 
> Then it'd try to download http://bazco.com/0.5.8.2/bazbaz.module.d
> instead of the default site (which is presumably the latest version).
> 
> This approach also makes it easy to add third party servers and
> libraries, so you wouldn't be dependent on a central source for your
> code.
> 
> 
> Here's a potential problem: what if bazbaz needs some specific version
> of something too? Maybe it could check for a config file on its server
> too, and use those directives when getting the library.

This is how it goes: you come up with more and more features if you spend 
some time THINKING about the possible functionality for such a tool. 
Instead of NIH, why don't you just study what the existing tools do and 
pick up all the relevant features? The reason there are so many open source 
tools doing exactly the same thing is that developers are too lazy to study 
the previous work and start writing code before common sense kicks in.


Re: DVCS (was Re: Moving to D)

2011-01-19 Thread retard
Wed, 19 Jan 2011 19:15:54 +, retard wrote:

> Wed, 19 Jan 2011 03:11:07 -0800, Walter Bright wrote:
> 
>> KennyTM~ wrote:
>>> You should use LF ending, not CRLF ending.
>> 
>> I never thought of that. Fixing that, it gets further, but still
>> innumerable errors:
>> 
>> 
>> [snip]
> 
> I already told you in message digitalmars.d:126586
> 
> "..your Ubuntu version isn't supported anymore. They might have already
> removed the package repositories for unsupported versions and that might
> indeed lead to problems"

So... the situation is so bad that you can't install ANY packages anymore. 
Accidentally removing packages can make the system unbootable and those 
applications are gone for good (unless you do a fresh reinstall). My bet 
is that, if it isn't already impossible to upgrade to a new version, once 
they remove the repositories for the next Ubuntu version you're completely 
fucked.


Re: What Makes A Programming Language Good

2011-01-19 Thread retard
Wed, 19 Jan 2011 19:41:47 +, Adam Ruppe wrote:

> retard wrote:
>> A build tool without any kind of dependency versioning support is a
>> complete failure.
> 
> You just delete the old files and let it re-download them to update. If
> the old one is working for you, simply keep it.

I meant that if the latest version 0.321 of the project 'foobar' depends 
on 'bazbaz 0.5.8.2', but versions 0.5.8.4 - 0.5.8.11 (API but not ABI 
compatible), 0.5.9 (mostly incompatible) and 0.6 - 0.9.12.3 (totally 
incompatible) also exist, the build fails badly when downloading the latest 
library. If you don't document the versions of the dependencies anywhere, 
it's almost impossible to build the project even manually.
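
To make the missing piece concrete, here's a tiny D sketch of the check a
build tool would have to do before fetching a dependency (hypothetical
helper names and a naive dot-separated numeric version scheme, nothing more):

import std.algorithm : map;
import std.array : array, split;
import std.conv : to;

// Parse "0.5.8.2" into comparable numeric parts.
int[] parseVersion(string v)
{
    return v.split(".").map!(to!int).array;
}

// True if 'have' lies in the inclusive range [lo, hi].
bool inRange(string have, string lo, string hi)
{
    auto h = parseVersion(have);
    return parseVersion(lo) <= h && h <= parseVersion(hi);
}

void main()
{
    // foobar 0.321 is happy with bazbaz 0.5.8.2 - 0.5.8.11, nothing newer.
    assert(inRange("0.5.8.7", "0.5.8.2", "0.5.8.11"));
    assert(!inRange("0.9.12.3", "0.5.8.2", "0.5.8.11"));
}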


Re: What Makes A Programming Language Good

2011-01-19 Thread retard
Wed, 19 Jan 2011 13:56:17 +, Adam Ruppe wrote:

> Andrei wrote:
>>  We need a package system that takes Internet distribution
>> into account.
> 
> Do you think something like my simple http based system would work?
> 
> Fetch dependencies. Try to compile. If the linker complains about
> missing files, download them from http://somewebsite/somepath/filename,
> try again from the beginning.
> 
> There's no metadata, no version tracking, nothing like that, but I don't
> think such things are necessary. Worst case, just download the specific
> version you need for your project manually.

A build tool without any kind of dependency versioning support is a 
complete failure. Especially if it also tries to handle external non-D 
dependencies. It basically makes supporting all libraries with rapid API 
changes quite impossible.


Re: Potential patent issues

2011-01-19 Thread retard
Wed, 19 Jan 2011 12:50:46 -0500, Jeff Nowakowski wrote:

> On 01/18/2011 05:52 PM, BlazingWhitester wrote:
>>
>> Walter, could you give some comments about this? Does dmd violate
>> anything?
> 
> It's probably in Walter's best interest to not even look at it.
> 
> On the one hand, it's probably a crap software patent that the Patent
> Office has been handing out like candy, and removing basic features that
> have been patented could cripple D. Whoever owns it might not decide to
> sue, Walter's implementation might not infringe, it might be
> invalidated, etc.
> 
> On the other hand, if Walter is sued and found to have infringed the
> patent, and if he "willfully infringed", meaning he had knowledge of the
> patent, then he could face up to three times damages.

At least he knows about it now, unless he deliberately ignores all newsgroup 
posts containing the word 'patent'. I think only C# and D market the 
language with a feature called 'delegate'. What's fun is that even if you're 
right and there's prior art or the patent is way too trivial, the patent 
trial can become very expensive.


Re: DVCS (was Re: Moving to D)

2011-01-19 Thread retard
Wed, 19 Jan 2011 03:11:07 -0800, Walter Bright wrote:

> KennyTM~ wrote:
>> You should use LF ending, not CRLF ending.
> 
> I never thought of that. Fixing that, it gets further, but still
> innumerable errors:
> 

> [snip]

I already told you in message digitalmars.d:126586

"..your Ubuntu version isn't supported anymore. They might have already 
removed the package repositories for unsupported versions and that might 
indeed lead to problems"

It's exactly like using Windows 3.11 now. Totally unsupported. It's so sad 
that the leader of the D language is so incompetent with open source 
technologies. If you really want to stick with outdated operating system 
versions, why don't you install all the "stable" and "important" services 
on some headless virtual server (on another machine) and run the latest 
Ubuntu on your main desktop? It's hard to believe making backups of your 
/home/walter is so hard. That ought to be everything you need to do with 
desktop Ubuntu.


Re: DVCS (was Re: Moving to D)

2011-01-16 Thread retard
Sun, 16 Jan 2011 15:22:13 -0500, Nick Sabalausky wrote:

> Dude, you need to upgrade!!!

CRTs have a limited lifetime. It's simply a fact that you need to switch 
to flat panels or something better. They probably won't even manufacture 
CRTs anymore. It's becoming more and more impossible to purchase *unused* 
CRTs anywhere, at least at a reasonable price. For example, used 17" TFTs 
cost less than $40.

I found pages like this: http://shopper.cnet.com/4566-3175_9-0.html

Even the prices aren't very competitive. I only remember that all refresh 
rates below 85 Hz caused me headaches and eye fatigue. You can't use the 
max resolution @ 60 Hz for very long.


> Why should *I* spend the money to replace something that already
> works fine for me?

You might get more things done by using a bigger screen. Maybe get some 
money to buy better equipment and stop complaining.

 
>> Besides, this whole changing the resolution thing is a consequence of
>> using crappy software. What you want is set the resolution to the
>> maximum and do the rest in software. And guess what - at their maximum,
>> CRT monitors suck compared to flat panels.
>>
>>
> Agreed, but show me an OS that actually *does* handle that reasonably
> well. XP doesn't. Win7 doesn't. Ubuntu 9.04 and Kubuntu 10.10 don't.
> (And I'm definitely not going back to OSX, I've had my fill of that.)

My monitors have had about the same pixel density over the years. EGA 
(640x400) or 720x348 (Hercules) / 12", 800x600 / 14", 1024x768 / 15-17", 
1280x1024 / 19", 1280x1024 / 17" TFT, 1440x900 / 19", 1920x1080 / 21.5", 
2560x1600 / 30"

Thus, there's no need to enlarge all graphical widgets or text. My vision 
is still ok. What changes is the amount of simultaneously visible area 
for applications. You're just wasting expensive screen real estate by 
enlarging everything. You're supposed to run more simultaneous tasks on a 
larger screen.


>> I've actually compared the rated power consumpsion between CRTs and
>> LCDs of
>> similar size and was actually surprised to find that there was little,
>> if any, real difference at all on the sets I compared.

> I'm pretty sure I did point out the limitations of my observation: "...on
> all the sets I compared". And it's pretty obvious I wasn't undertaking a
> proper extensive study. There's no need for sarcasm.

Your comparison was pointless. You can come up with all kinds of 
arbitrary comparisons. The TFT panel power consumption probably varies 
between 20 and 300 Watts. Do you even know how much power your CRT uses?

CRTs used as computer monitors and those used as televisions have 
different characteristics. CRT TVs have better brightness and contrast, 
but lower resolution and sharpness than CRT computer monitors. Computer 
monitors tend to need more power, maybe even twice as much. Also larger 
monitors of the same brand tend to use more power. When a CRT monitor 
gets older, you need more power to illuminate the phosphor as the amount 
of phosphor in the small holes of the grille/mask decreases over time.

This isn't the case with TFTs. The backlight brightness and the panel's 
color handling dictate power consumption. A 15" TFT might need as much 
power as a 22" TFT using the same panel technology. TFT TVs use more power 
as they typically provide higher brightness. Same thing if you buy those 
high quality panels for professional graphics work. TFT power consumption 
has also dropped drastically because of AMOLED panels, LED backlights and 
better dynamic contrast logic. The fluorescent backlights lose some of 
their brightness (maybe about 30%) before dying, unlike a CRT, which goes 
totally dark. The LED backlights won't suffer from this (at least not 
observably).

My observation is that e.g. in computer classes (30+ computers per room) 
the air conditioning started to work much better after the upgrade to 
flat panels. Another upgrade turned the computers into micro-ITX thin 
clients. Now the room doesn't need air conditioning at all.


Re: DVCS (was Re: Moving to D)

2011-01-16 Thread retard
Sun, 16 Jan 2011 21:46:25 +0100, Andrej Mitrovic wrote:

> With CRTs I could spend a few hours in front of the PC, but after that
> my eyes would get really tired and I'd have to take a break. Since I
> switched to LCDs I've never had this problem anymore, I could spend a
> day staring at screen if I wanted to. Of course, it's still best to take
> some time off regardless of the screen type.

That's a good point. I've already forgotten how much eye strain the old 
monitors used to cause. 

> 
> Anyway.. how about that Git thing, then? :D

:)


Re: DVCS (was Re: Moving to D)

2011-01-16 Thread retard
Sun, 16 Jan 2011 12:34:36 -0800, Walter Bright wrote:

> Andrei Alexandrescu wrote:
>> Meanwhile, you are looking at a gamma gun shooting atcha.
> 
> I always worried about that. Nobody actually found anything wrong, but
> still.

It's like the cell phone studies on whether they cause brain tumors or not.


Re: DVCS (was Re: Moving to D)

2011-01-16 Thread retard
Sat, 15 Jan 2011 23:47:09 -0500, Nick Sabalausky wrote:

> Bumping up to a higher resolution can be good when dealing with images,
> or whenever you're doing anything that could use more screen real-estate
> at the cost of smaller UI elements. And CRTs are more likely to go up to
> really high resolutions than non-CRTs. For instance, 1600x1200 is common
> on even the low-end CRT monitors (and that was true even *before*
> televisions started going HD - which is *still* lower-rez than
> 1600x1200).

The standard resolution for new flat panels has been 1920x1080 or 
1920x1200 for a long time now. The panel size has slowly improved from 
12-14" to 21.5" and 24", the price has gone down to about $110-120. Many 
of the applications have been tuned for 1080p.

When I abandoned CRTs, the most common size was 17" or 19". Those 
monitors indeed supported resolutions up to 1600x1200 or more. However, 
the best resolution was about 1024x768 or 1280x1024 for 17" monitors and 
1280x1024 or a step up for 19" monitors. I also had one 22" or 23" Sony 
monitor which had an optimal resolution of 1600x1200 or at most one step 
bigger. That's much less than what the low-end models offer now.

It's hard to believe you're using anything larger than 1920x1200, because 
legacy graphics cards don't support very high resolutions, especially 
via DVI. For example, I recently noticed a top-of-the-line Geforce 6 card 
only supports resolutions up to 2048x1536 @ 85 Hz. Guess how it works 
with a 30" Cinema Display HD @ 2560x1600. Another thing is subpixel 
antialiasing. You can't really do it without a TFT panel and digital 
video output.

> Yea, you can get super high resolution non-CRTs, but they're much more
> expensive. And even then, you lose the ability to do any real desktop
> work at a more typical resolution. Which is bad because for many things
> I do want to limit my resolution so the UI isn't overly-small. And yea,
> there are certian things you can do to scale up the UI, but I've never
> seen an OS, Win/Lin/Mac, that actually handled that sort of thing
> reasonably well. So CRTs give you all that flexibility at a sensible
> price.

You mean DPI settings?

> Also, it can be good when mirroring the display to TV-out or, better
> yet, using the "cinema mode" where any video-playback is sent fullscreen
> to the TV (which I'll often do), because those things tend to not work
> very well when the monitor isn't reduced to the same resolution as the
> TV.

But my TV happily accepts 1920x1080? Sending the same digital signal to 
both works fine here. YMMV

>> OTOH when he has a good CRT (high resolution, good refresh rate) there
>> may be little reason to replace it, as long as it's working.. apart
>> from the high power consumption and the size maybe.
>>
>>
> I've actually compared the rated power consumpsion between CRTs and LCDs
> of similar size and was actually surprised to find that there was
> little, if any, real difference at all on the sets I compared.

How much power do the CRTs consume? The max power consumption of LED-backlit 
panels has gone down considerably and you never use their max 
brightness. Typical power consumption of a modern 21.5" panel might stay 
between 20 and 30 Watts when you're just typing text.


Re: DVCS (was Re: Moving to D)

2011-01-16 Thread retard
Sun, 16 Jan 2011 11:56:34 +0100, Lutger Blijdestijn wrote:

> Nick Sabalausky wrote:
> 
>> "Andrei Alexandrescu"  wrote in message
>> news:igt2pl$2u6e$1...@digitalmars.com...
>>> On 1/15/11 2:23 AM, Nick Sabalausky wrote:
>>>> I still use CRTs (one big reason being that I hate the idea of only
>>>> being able to use one resolution)
>>>
>>> I'd read some post of Nick and think "hmm, now that's a guy who
>>> follows only his own beat" but this has to take the cake. From here
>>> on, I wouldn't be surprised if you found good reasons to use whale fat
>>> powered candles instead of lightbulbs.
>>>
>>>
>> Heh :)  Well, I can spend no money and stick with my current 21" CRT
>> that already suits my needs (that I only paid $25 for in the first
>> place), or I can spend a hundred or so dollars to lose the ability to
>> have a decent looking picture at more than one resolution and then say
>> "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo.
>> And popularity and trendyness are just non-issues.
> 
> Actually nearly all lcds below 600$-800$ price point (tn-panels) have
> quite inferior display of colors compared to el cheapo crt's, at any
> resolution.

There are also occasional special offers on IPS flat panels.

The TN panels have also improved. I bought a cheap 21.5" TN panel as my 
second monitor last year. The viewing angles are really wide, basically 
about 180 degrees horizontally, a tiny bit less vertically. I couldn't 
see any effects of dithering noise either. It has a DVI input and a power 
consumption of about 30 Watts max (I run it in eco mode). Now that both 
framerate and viewing angle problems have been more or less solved for TN 
panels (except in pivot mode), the only remaining problem is color 
reproduction. But that only matters when working with photographs.


Re: DVCS (was Re: Moving to D)

2011-01-15 Thread retard
Sat, 15 Jan 2011 03:23:41 -0500, Nick Sabalausky wrote:

> "retard"  wrote in message
>> PSUs: Never ever buy the cheap models. There's a list of bad
>> manufacturers in the net. They make awful shit.
> 
> Another problem is that, as places like Sharky Extreme and Tom's
> Hardware found out while testing, it seems to be common practice for PSU
> manufacturers to outright lie about the wattage.

That's true. But it's also true that PSU efficiency and power have 
improved drastically, and so has their overall quality. In the 1990s it was 
pretty common that computer stores mostly sold those shady brands with a 
more or less lethal design. There are lots of reliable brands now. If 
you're not into gaming, it hardly matters which (good) PSU you buy. They 
all provide 300+ Watts and your system might consume 70-200 Watts, even 
under full load.

>> Monitors: The CRTs used to break every 3-5 years. Even the high quality
>> Sony monitors :-| I've used TFT panels since 2003. The inverter of the
>> first 14" TFT broke after 5 years of use. Three others are still
>> working, after 1-6 years of use.
>>
>>
> I still use CRTs (one big reason being that I hate the idea of only
> being able to use one resolution), and for a long time I've always had
> either a dual-monitor setup or dual systems with one monitor on each, so
> I've had a lot of monitors. But I've only ever had *one* CRT go bad, and
> I definitely use them for more than 5 years.
> 
> Also, FWIW, I'm convinced that Sony is *not* as good as people generally
> think. Maybe they were in the 70's or 80's, I don't know, but they're
> frequently no better than average.

I've disassembled a couple of CRT monitors. The Sony monitors have had 
aluminium-cased "modules" inside them, so replacing these should be 
relatively easy. They also had detachable wires between these units.  
Cheaper monitors have three circuit boards (one for the front panel, one 
in the back of the tube and one in the bottom). It's usually the board in 
the bottom of the monitor that breaks, which means that in cheaper monitors 
you need to cut all the wires to remove it. It's just this high-level 
design that I like in Sony's monitors. Probably other high quality brands 
like Eizo also do this. Sony may also use bad quality discrete components 
like capacitors and ICs. I can't say anything about that.


Re: DVCS (was Re: Moving to D)

2011-01-14 Thread retard
Fri, 14 Jan 2011 21:02:38 +0100, Daniel Gibson wrote:

> Am 14.01.2011 20:50, schrieb Walter Bright:
>> Daniel Gibson wrote:
>>> But a few years ago it was a lot worse, especially with cheap inkjets.
>>> Many supported only GDI printing which naturally is best supported on
>>> Windows (GDI is a windows interface).
>>
>> Yeah, but I bought an *HP* laserjet, because I thought everyone
>> supported them well.
>>
>> Turns out I probably have the only orphaned HP LJ model.
> 
> Yes, the HP Laserjets usually have really good support with PCL and
> sometimes even Postscript.
> You said you've got a HP (Laserjet?) 2300? On
> http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that
> printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level
> 3.
> 
> Generally http://www.openprinting.org/printers is a really good page to
> see if a printer has Linux-support and where to get drivers etc.

I'm not sure if Walter's Ubuntu version already has this, but the latest 
Ubuntus automatically install all CUPS-supported (USB) printers. I 
haven't tried this autodetection with parallel or network printers. The 
"easiest" way to configure CUPS is via the CUPS web interface 
( http://localhost:631 ). In some early Ubuntu versions the printer 
configuration was broken. You had to add yourself to the lpadmin group 
and whatnot. My experiences with printers are:

Linux (Ubuntu)

1. Plug in the cable
2. Print

Mac OS X

1. Plug in the cable
2. Print

Windows

1. Plug in the cable.
2. Driver wizard appears, fails to install
3. Insert driver CD (preferably download the latest drivers from the 
internet)
4. Save your work
5. Reboot
6. Close the HP/Canon/whatever ad dialog
7. Restart the programs and load your work
8. Print


Re: DVCS (was Re: Moving to D)

2011-01-14 Thread retard
Thu, 13 Jan 2011 19:04:59 -0500, Nick Sabalausky wrote:

> My failure list from most to least would be this:
> 
> 1. power supply / printer
> 2. optical drive / floppies (the disks, not the drives)
> 3. hard drive
> 4. monitor / mouse / fan

My list is pretty much the same. I bought a (Toshiba IIRC) dot matrix 
printer (the price was insane) in the 1980s. It STILL works fine when 
printing ASCII text, but it's "a bit" noisy and slow. Another thing is, 
after upgrading from DOS, I haven't found any drivers for printing 
graphics. On DOS, only some programs had specially crafted drivers for this 
printer and some had drivers for some other proprietary protocol the 
printer "emulates" :-)

My second printer was some Canon LBP in the early 90s. It STILL works 
without any problems (still connected to my Ubuntu CUPS server), but it's 
also relatively slow and physically huge. I used to replace the toner and 
drums, the toner every ~2 years (prints 1500-3000 pages of 5% text) and the 
drum every 5-6 years. We bought it used from a company. It had been 
repaired once by the official Canon service. After that, almost 20 years 
without repair.

I also bought a faster (USB!) laser printer from Brother a couple of years 
ago. I've replaced the drum once and replaced the toner three times with 
some cheapo 3rd party stuff. It was a bit risky to buy a set of 10 toner 
kits along with the printer (even laser printers are so cheap now), 
but it was an especially cheap offer and we figured the spare part prices 
would go up anyway. The amortized printing costs are probably less than 3 
cents per page.

Now, I've also bought Canon, HP, and Epson inkjets. What can I say... The 
printers are cheap. The ink is expensive. They're slow, and the result looks 
like shit (not very photo-realistic) compared to the online printing 
services. AND I've "broken" about 8 of them in 15 years. It's way too 
expensive to start buying spare parts (e.g. when the dry ink gets stuck 
in the ink "tray" in Canon printers). Nowadays I print photos using some 
online service. The inkjet printer quality still sucks IMO. Don't buy 
them.

PSUs: Never ever buy the cheap models. There's a list of bad 
manufacturers on the net. They make awful shit. The biggest problem is, 
if the PSU breaks, it might also break other parts, which makes all PSU 
failures really expensive. I've bought Seasonic, Fortron, and 
Corsair PSUs since the late 1990s. They work perfectly. If some part 
fails, it's the PSU fan (or sometimes the fuse, when switching the PSU on 
causes a surge). Fuses are cheap. Fans last much longer if you replace 
the oil every 2-4 years. Scrape off the sticker in the center of the 
fan and pour in appropriate oil. I'm not kidding! I've got one 300W PSU 
from 1998 and it still works and the fan is almost as quiet as if it was 
new.

Optical drives: The number 1 reason for breakage is that I forget to close 
the tray and kick it off! Currently I don't use internal optical drives 
anymore. There's one external DVD burner. I rarely use it. And it's safe 
from my feet on the table :D

Hard drives: these always fail, sooner or later. There's nothing you can 
do except RAID and backups (labs.google.com/papers/disk_failures.pdf). 
I've successfully terminated all (except those in use) hard drives so far 
by using them normally.

Monitors: The CRTs used to break every 3-5 years. Even the high quality 
Sony monitors :-| I've used TFT panels since 2003. The inverter of the 
first 14" TFT broke after 5 years of use. Three others are still working, 
after 1-6 years of use.

Mice: I've always bought Logitech mice. NEVER had any failures. The 
current one is MX 510 (USB). Previous ones used the COM port. The bottom 
of the MX510 shows signs of hardcore use, but the internal parts haven't 
fallen off yet and the LED "eye" works :-D

Fans: If you want reliability, buy fans with ball bearings. They make 
more noise than sleeve bearings. I don't believe in expensive high 
quality fans. Sure, there are differences in airflow and noise 
levels, but the maximum reliability won't be any better. The normal PC 
stores don't sell any fans with industrial-quality bearings. Like I said 
before, remember to replace the oil ( http://www.dansdata.com/fanmaint.htm ) 
-- I still have high quality fans from the 1980s in 24/7 use. The only 
problem is, I couldn't anticipate how much the power consumption would 
grow. The old ones are 40-80 mm fans. Now (at least gaming) computers have 
120 mm or 140 mm or even bigger fans.


Re: DVCS (was Re: Moving to D)

2011-01-12 Thread retard
Wed, 12 Jan 2011 22:46:46 +0100, Ulrik Mikaelsson wrote:

> Wow. The thread that went "Moving to D"->"Problems with
> DMD"->"DVCS"->"WHICH DVCS"->"Linux Problems"->"Driver
> Problems/Manufacturer preferences"->"Cheap VS. Expensive". It's a
> personally observed record of OT threads, I think.
> 
> Anyways, I've refrained from throwing fuel on the thread as long as I
> can, I'll bite:
> 
>> It depends on a number of factors, including the quality of the card
>> and the conditions that it's being used in. I've had video cards die
>> before. I _think_ that it was due to overheating, but I really don't
>> know. It doesn't really matter. The older the part, the more likely it
>> is to break. The cheaper the part, the more likely it is to break.
>> Sure, the lack of moving parts makes it less likely for a video card to
>> die, but it definitely happens. Computer parts don't last forever, and
>> the lower their quality, the less likely it is that they'll last. By no
>> means does that mean that a cheap video card isn't necessarily going to
>> last for years and function just fine, but it is a risk that a cheap
>> card will be too cheap to last.
> "Cheap" in the sense of "less money" isn't the problem. Actually, HW
> that cost more is often high-end HW which creates more heat, which
> _might_ actually shorten the lifetime. On the other hand, low-end HW is
> often less heat-producing, which _might_ make it last longer. The real
> difference lies in what level of HW are sold at which clock-levels, I.E.
> manufacturing control procedures. So an expensive low-end for a hundred
> bucks might easily outlast a cheap high-end alternative for 4 times the
> money.
> 
> Buy quality, not expensive. There is a difference.

Nicely written, I fully agree with you.


Re: DVCS (was Re: Moving to D)

2011-01-12 Thread retard
Wed, 12 Jan 2011 13:22:28 -0800, Jonathan M Davis wrote:

> On Wednesday 12 January 2011 13:11:13 retard wrote:
>> Same thing, can't imagine how a video card could break. The old ones
>> didn't even have massive cooling solutions, the chips didn't even need
>> a heatsink. The only problem is driver support, but on Linux it mainly
>> gets better over the years.
> 
> It depends on a number of factors, including the quality of the card and
> the conditions that it's being used in.

Of course.

> I've had video cards die before.
> I _think_ that it was due to overheating, but I really don't know. It
> doesn't really matter.

Modern GPU and CPU parts are of course getting hotter and hotter. They're 
getting so hot it's a miracle the components near the cores, such as 
capacitors, can handle it. You need better cooling, which means even more 
parts that can break.

> The older the part, the more likely it is to break.

Not true. http://en.wikipedia.org/wiki/Bathtub_curve

> The cheaper the part, the more likely it is to break.

That might be true if the part is a power supply or a monitor. However, 
the latest and greatest video cards and CPUs are sold at an extremely 
high price mainly to hardcore gamers (and 3D modelers -- Quadro & 
FireGL). This is sometimes purely an intellectual property issue, nothing 
to do with the physical parts.

For example, I've earned several hundred euros by installing soft-mods, 
that is, upgraded firmware / drivers. Ever heard of the Radeon 9500 -> 9700, 
9800SE -> 9800, and lately 6950 -> 6970 mods? I've also modded one PC 
NVIDIA card to work on Macs (sold at a higher price) and done one Geforce 
-> Quadro mod. You don't touch the parts at all, just flash the ROM. It 
would be a miracle if that improved the physical quality of the parts. It 
does raise the price, though.

Another observation: the target audience of the low-end NVIDIA cards is 
usually HTPC and office users. These computers have small cases and 
require low profile cards. The cards actually have *better* multimedia 
features (PureVideo) than the high-end cards for gamers. These cards are 
built by the same companies as the larger versions (Asus, MSI, Gigabyte, 
and so on). Could it just be that by giving the buyer fewer physical parts 
and less intellectual property in the form of GPU firmware, they can sell 
at a lower price?

There are also these cards with the letters "OC" in their name. The 
manufacturer has deliberately overclocked the cards beyond their specs. 
That's actually hurting reliability, but the price is even higher.


Re: DVCS (was Re: Moving to D)

2011-01-12 Thread retard
Wed, 12 Jan 2011 14:22:59 -0500, Nick Sabalausky wrote:

> "Andrej Mitrovic"  wrote in message
> news:mailman.571.1294806486.4748.digitalmar...@puremagic.com...
>> Notice the smiley face -> :D
>>
>> Yeah I didn't check the price, it's only 30$. But there's no telling if
>> that would work either. Also, dirt cheap video cards are almost
>> certainly going to cause problems. Even if the drivers worked
>> perfectly, a year down the road things will start breaking down. Cheap
>> hardware is cheap for a reason.
> 
> Rediculous. All of the video cards I'm using are ultra-cheap ones that
> are about 10 years old and they all work fine.

There's no reason why they would break. A few months ago I was 
reconfiguring an old server at work which still used two 16-bit 10 
megabit ISA network cards. I fetched a kernel upgrade (2.6.27.something). 
It's a modern kernel which is still maintained and had up-to-date drivers 
for the 20-year-old devices! Those devices have no moving parts and are 
stored inside EMP & UPS protected sturdy server cases. How the heck could 
they break?

Same thing, can't imagine how a video card could break. The old ones 
didn't even have massive cooling solutions, the chips didn't even need a 
heatsink. The only problem is driver support, but on Linux it mainly gets 
better over the years.


Re: DVCS (was Re: Moving to D)

2011-01-12 Thread retard
Wed, 12 Jan 2011 19:11:22 +0100, Daniel Gibson wrote:

> Am 12.01.2011 04:02, schrieb Jean Crystof:
>> Walter Bright Wrote:
>>
>>> My mobo is an ASUS M2A-VM. No graphics cards, or any other cards
>>> plugged into it. It's hardly weird or wacky or old (it was new at the
>>> time I bought it to install Ubuntu).
>>
>> ASUS M2A-VM has 690G chipset. Wikipedia says:
>> http://en.wikipedia.org/wiki/AMD_690_chipset_series#690G
>>
>> "AMD recently dropped support for Windows and Linux drivers made for
>> Radeon X1250 graphics integrated in the 690G chipset, stating that
>> users should use the open-source graphics drivers instead. The latest
>> available AMD Linux driver for the 690G chipset is fglrx version 9.3,
>> so all newer Linux distributions using this chipset are unsupported."
>>
>>
> I guess a recent version of the free drivers (as delivered with recent
> Ubuntu releases) still is much better than the one in Walters >2 Years
> old Ubuntu.

Most likely. After all, they're fixing more bugs than they're creating new 
ones. :-) My other guess is that, while the open source drivers are far from 
perfect for hardcore gaming, the basic functionality like setting up a 
video mode is getting better. Remember the days when you needed to type in 
all the internal and external clock frequencies and packed pixel bit counts 
in xorg.conf?!

> Sure, game performance may not be great, but I guess normal working
> (even in 1920x1200) and watching youtube videos works.

Embedded videos on web pages used to require huge amounts of CPU power 
when you were upscaling them in fullscreen mode. The reason is that 
Flash only recently started supporting hardware-accelerated video, on 
***32-bit*** systems equipped with an ***NVIDIA*** card. The same VDPAU 
libraries are used by the native video players.

I tried to accelerate video playback with my Radeon HD 5770, but it 
failed badly. Believe it or not, my 3 GHz 4-core Core i7 system with 24 
GB of RAM and the fast Radeon HD 5770 was too slow to play 1080p videos @ 
1920x1080 using the open source drivers. Without hardware acceleration 
you need a modern high-end dual-core system or faster to play the video, 
assuming the drivers aren't broken. If you only want to watch YouTube 
videos in windowed mode, you still need a 2+ GHz single-core.

But... YouTube has switched to HTML5 video recently. This should take the 
requirements down a notch. Still, I wouldn't trust integrated graphics 
that much. They've always been crap.


Re: DVCS (was Re: Moving to D)

2011-01-10 Thread retard
Sat, 08 Jan 2011 12:36:39 -0800, Walter Bright wrote:

> Lutger Blijdestijn wrote:
>> Walter Bright wrote:
>> 
>>>> Looks like meld itself used git as it's repository. I'd be surprised
>>>> if it doesn't work with git. :-)
>>> I use git for other projects, and meld doesn't work with it.
>> 
>> What version are you on? I'm using 1.3.2 and its supports git and
>> mercurial (also committing from inside meld & stuff, I take it this is
>> what you mean with supporting a vcs).
> 
> The one that comes with: sudo apt-get meld
> 
> 1.1.5.1

One thing came to my mind. Unless you're using Ubuntu 8.04 LTS, your 
Ubuntu version isn't supported anymore. They might have already removed 
the package repositories for unsupported versions and that might indeed 
lead to problems with graphics and video players as you said.

The support for desktop 8.04 and 9.10 is also nearing its end (April this 
year). I'd recommend backing up your /home and installing 10.04 LTS or 
10.10 instead.


Re: DVCS (was Re: Moving to D)

2011-01-10 Thread retard
Sat, 08 Jan 2011 14:34:19 -0800, Walter Bright wrote:

> Michel Fortin wrote:
>> I know you had your reasons, but perhaps it's time for you upgrade to a
>> more recent version of Ubuntu? That version is what comes with Hardy
>> Heron (april 2008).
>> 
> 
> I know. The last time I upgraded Ubuntu in place it fd up my system
> so bad I had to wipe the disk and start all over. It still won't play
> videos correctly (the previous Ubuntu worked fine), the rhythmbox music
> player never worked again, it wiped out all my virtual boxes, I had to
> spend hours googling around trying to figure out how to reconfigure the
> display driver so the monitor worked again, etc.
> 
> I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking
> forward to it.

Ubuntu has a menu entry for "restricted drivers". It provides support for 
both ATI/AMD (Radeon 8500 or better, appeared in 1998 or 1999!) and 
NVIDIA cards (Geforce 256 or better, appeared in 1999!) and I think it 
automatically suggests (a pop-up window) correct drivers in the latest 
releases right after the first install.

Intel chips are automatically supported by the open source drivers. VIA 
and S3 may or may not work out of the box. I'm just a bit curious to know 
what GPU you have. If it's some ancient VLB (VESA local bus) or ISA card, 
I can donate $15 towards buying one that uses AGP or PCI Express.

Ubuntu doesn't support all video formats out of the box, but the media 
players and browsers automatically suggest installing the missing codecs. At 
least in the 3 or 4 latest releases. Maybe the problem isn't the encoder; 
it might be a Linux-incompatible web site.

>> Or you could download the latest version from meld's website and
>> compile it yourself.
> 
> Yeah, I could spend an afternoon doing that.

Another one of these jokes? Probably one of the best compiler authors in 
the whole world spends a whole afternoon doing something (compiling a 
program) that total Linux noobs do in less than 30 minutes with the help 
of a Google search.


Re: DVCS (was Re: Moving to D)

2011-01-10 Thread retard
Sun, 09 Jan 2011 06:00:21 -0600, Christopher Nicholson-Sauls wrote:

> On 01/08/11 20:18, Walter Bright wrote:
>> Vladimir Panteleev wrote:
>>> On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright
>>>  wrote:
>>>
>>>> Yeah, I could spend an afternoon doing that.
>>>
>>> sudo apt-get build-dep meld
>>> wget
>>> http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2 tar
>>> jxf meld-1.5.0.tar.bz2
>>> cd meld-1.5.0
>>> make
>>> sudo make install
>>>
>>> You're welcome ;)
>>>
>> Thanks, I'll give it a try!
> 
> I say you should consider moving away from *Ubuntu and to something more
> "developer-friendly" such as Gentoo, where the command to install meld
> is just:
> emerge meld
> 
> ...done.  And yes, that's an install from source.  I just did it myself,
> and it took right at one minute.

Gentoo really needs a high-end computer to run fast. FWIW, the same meld 
takes 7 seconds to install on my Ubuntu. That includes fetching the 
package from the internet (1-2 seconds). Probably even faster on Arch.


Re: Why Ruby?

2010-12-21 Thread retard
Tue, 21 Dec 2010 19:50:21 +, Bruno Medeiros wrote:

> On 11/12/2010 01:26, Ary Borenszweig wrote:
>> http://vimeo.com/17420638
>>
>> A very interesting talk.
>>
>>
> Whoa.
> 
> Over the last 5 years or so, with surge in popularity of dynamic
> languages like Ruby, Python, etc., I've seen several arguments put forth
> in favor of dynamic typing, and gradually I've always found there was a
> /subtle/ parallel with the arguments and reasons put forth for
> libertarian/anti-authoritarian/anti-corporate ideologies. After seeing
> this talk, I guess it's not so subtle after all... ~_~'
> 
> 
> Let me offer my thoughts on this. I don't think his argument is
> fundamentally rational. And I don't just mean wrong or illogical, I mean
> /irrational/: it is driven by an emotional bias of something not related
> to programmer productivity, which is what the discussion should be
> about. And I think his opinion is somewhat generally representative of
> many dynamic language proponents.
> 
> What I think is happening is this: These people, if and when they
> program on languages with static typing, they get annoyed by some (or
> all) of the aspects of static typing. That's normal so far, but now the
> problem is that while some of this annoyance may be driven from a
> genuine questioning of whether static typing is worth it or not (in
> usefulness and productivity), the rest of the annoyance is instead
> driven by an external emotional factor: if the language doesn't let you
> do something that it could easily let you do, then it is perceived as a
> "restriction of your freedoms". The programmer makes an emotional
> connection to personal issues unrelated to the field of programming.
> Another variant of this emotional response in this situation, and
> probably a much more common one, is not about political ideology, but
> rather the programmer perceives the language restriction to be part of a
> corporate culture that says that programmers are not smart enough to be
> fully trusted, and they need to be controlled to make sure they don't do
> stupid things. In other words the language thinks your are a dumb monkey
> who needs to be kept in line. Java is the poster child for this
> mentality, not only due to the language itself which is perceived to be
> simplistic, but also due to Java's strong association to the corporate
> and enterprise world. In a less extreme view, it is not about
> controlling stupidity, but controlling creativity (a view popular
> amongst "artist"/"painter" programmers). So here the programmers are not
> dumb, but still they need to be kept in line with rules, constraints,
> specifications, strict APIs, etc.. You can't do anything too strange or
> out of the ordinary, and the language is a reflection of that,
> especially with regards to restrictions on dynamic typing (and other
> dynamic stuff like runtime class modification).
> 
> Unfortunately this emotional response is often not fully conscious, or
> at least, it is not clearly expressed to others by the underlying
> programmer. And once this happens, _everything is lost from the
> beginning, in terms of trying to have a sensible debate._ Because from
> now on, these programmers will use half-baked arguments to try to
> justify their preference of dynamic languages. The arguments will be
> half-baked because they will try to argue in the area of effectiveness
> (programmer productivity), yet the main reason they like/dislike the
> language is the attitude of the language creators and/or community.
> (Interestingly enough, an incredibly similar cognitive-dissonance driven
> fallacy happens in discussions of actual political ideologies)
> 
> (Note, I'm not saying this is the case with all programmers, or even
> most, of the proponents of dynamic typing. In particular, nor I am
> saying it's the case with Ary :p )
> 
> 
> BTW, for a while I was quite okay with this talk, because the author
> seemed to make clear what the liked about Ruby was the underlying
> attitude. He mentioned the language design goal of "making the
> programmer happy". He mentioned all those quirks of the community like
> the 'second' property, the 42 one, the cheerleaders at the convention,
> etc.. But then he made those comments about how static typing is there
> in Java and other languages because it is thought programmers are too
> stupid to be able to handle things otherwise (don't remember his exact
> words), or even worse, the comment/point about how many programmers in
> the audience made a bug because of not specifying a type... And from
> that point it was easy to go downhill, and indeed the talk did. Although
> I am happy for him making that explicit parallel with political
> ideology, it illustrates my point very well, even if not all Ruby
> developers would agree with him 100%.
> 
> 
> Note that all of what I said above is a comment about the nature of the
> discussion of static vs. typing. I didn't make an actual argument for or
> against static typing (in 

Re: Why Ruby?

2010-12-19 Thread retard
Sun, 19 Dec 2010 01:24:43 +, JRM wrote:

> On Sat, 18 Dec 2010 16:01:37 -0800, Walter Bright wrote:
> 
>> Simen kjaeraas wrote:
>>> The problem of D's lambda syntax is it is optimized for longer
>>> functions. Usually, the delegates I write are one line long. I cannot
>>> see that this syntax collides with anything at the moment, but feel
>>> free to enlighten me:
>>> 
>>> { => 4; }
>>> { a => 2*a; }
>>> { a, b => a>b; }
>>> { => @ + @; } // turns into { a, b => a + b; }
>>> 
>>> 
>> If size and simplicity of typing are critical, are those really better
>> than:
>> 
>>"a>b"
>> 
>> ?
> 
> I agree that those aren't really much better. This entire discussion
> seems a little odd to me.  People are trying to find a way to more
> easily write lambda's, focusing in particular on single expression
> lambda's.

Have you got any idea what a 'lambda' actually is? It originates from the 
lambda calculus. In lambda calculus, the lambda abstraction is something 
that takes a single argument and returns an *expression*. You can argue 
that this is less general than D's delegates, but the fact is that the 
functions you pass to sort, filter, map, reduce, ... typically consist of a 
single expression. Of course the explicit return generates additional 
syntactic bloat.
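
For reference, this is roughly what the two existing options look like in D
(a minimal sketch):

import std.algorithm : sort;

void main()
{
    int[] x = [3, 1, 2];

    // Statement-style delegate literal: the braces and the explicit
    // 'return' are the syntactic bloat being complained about.
    sort!((int a, int b) { return a > b; })(x);

    // The existing string shorthand: terse, but it's really a mixin
    // of an expression, not a true lambda.
    sort!("a > b")(x);

    assert(x == [3, 2, 1]);
}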

> In order to support lazy, D already allows an expression to be
> implicitly converted to a delegate returning either void or the type of
> the expression.  This covers the case of lambda's taking no arguments,
> and happens to be shorter than any of the proposed syntaxes.

Sorry, I don't remember how this works in D if you actually call the 
function with a delegate that doesn't take any arguments, but if lazy 
generates another thunk, this doesn't work consistently.
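
For reference, the implicit conversion being discussed is the one behind
'lazy' parameters; a minimal sketch:

import std.stdio : writeln;

// 'lazy' wraps the argument expression in a hidden zero-argument delegate.
void logIf(bool cond, lazy string msg)
{
    if (cond)
        writeln(msg);   // the expression building msg is evaluated only here
}

void main()
{
    int calls;
    string expensive() { ++calls; return "computed"; }

    logIf(false, expensive());  // never evaluated
    logIf(true, expensive());   // evaluated exactly once
    assert(calls == 1);
}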

> I think this idea (or something similar) is worth consideration.  It is
> simply a small extension to an already existing feature that would give
> D a terser syntax for lambda's than most of the other languages we've
> been discussing.

So in your opinion D's function literals should only be improved if you 
can somehow outdo existing languages; otherwise it just sounds like a 
stupid idea?


Re: Why Ruby?

2010-12-19 Thread retard
Sat, 18 Dec 2010 16:01:37 -0800, Walter Bright wrote:

> Simen kjaeraas wrote:
>> The problem of D's lambda syntax is it is optimized for longer
>> functions. Usually, the delegates I write are one line long. I cannot
>> see that this syntax collides with anything at the moment, but feel
>> free to enlighten me:
>> 
>> { => 4; }
>> { a => 2*a; }
>> { a, b => a>b; }
>> { => @ + @; } // turns into { a, b => a + b; }
>> 
>> 
> If size and simplicity of typing are critical, are those really better
> than:
> 
>"a>b"

In case you didn't see, two additional problems were also listed earlier 
in this thread:

 - template bloat (different strings generate new instances of the sort 
in the sorting example)
 - symbol visibility problems because of wrong scoping
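
A minimal sketch (my own) of the first point: the two calls below do 
exactly the same thing, but because the string literals differ, each one 
instantiates its own copy of sort.

import std.algorithm : sort;

void main()
{
    int[] x = [3, 1, 2];

    // Semantically identical predicates, but distinct template arguments,
    // so sort!("a>b") and sort!("a > b") are two separate instantiations.
    sort!("a>b")(x);
    sort!("a > b")(x);
}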


Re: Why Ruby?

2010-12-19 Thread retard
Sun, 19 Dec 2010 06:08:15 -0500, foobar wrote:

> Walter Bright Wrote:
> 
>> JRM wrote:
>> > you could write:
>> > sort!(@1>@2)(x);
>> [...]
>> > I think this idea (or something similar) is worth consideration.  It
>> > is simply a small extension to an already existing feature that would
>> > give D a terser syntax for lambda's than most of the other languages
>> > we've been discussing.
>> 
>> but:
>> 
>> sort!("a>b")(x);
>> 
>> is just as short! And it already works.
> 
> I think that the issue here is not about syntax as much as it is about
> semantics: As others said, this is equivalent to dynamic language's
> eval() or to D's string mixin, and this raises the question of
> hygiene which sadly has no good solution in D.
> 
> The main concern is this:
> In what context are the symbols 'a' and 'b' evaluated?
> 
> At the moment they cannot be correctly evaluated at the caller context
> and do not allow: sort!("a.foo() > b.bar()")(whatever);
> 
> The bikesheding of the syntax does not address this concern at all.

Two additional problems were also listed earlier in this thread:

 - template bloat (different strings generate new instances of the sort)
 - symbol visibility problems because of wrong scoping


Re: Why Ruby?

2010-12-18 Thread retard
Sat, 18 Dec 2010 19:09:24 +0100, Jacob Carlborg wrote:

> As Nick writes here the Scala/C#-style syntax is one suggestion. There
> are also several other syntaxes available, one just have to look at
> other languages to get ideas. Here's a list of the syntax used by a
> couple of different language, some languages are list more than once
> because they support more than one syntax. I've listed the languages in
> order from, what I think, the least verbose to the most verbose
> lambda/delegate syntax (the number in front of the languages is the
> level of verbose, if two languages are at the same level I think they
> are equally verbose).
> 
> 1 D: foo(writeln(3)); // lazy argument

That's not really equivalent to lambdas. It would be unfair to not 
mention Scala which also supports lazy arguments.

> 1 Scala: foo(_ * _)

This isn't the same. _ * _ is equivalent to (a, b) => a * b

1 Scala: foo(x => x * x)

> 2 C#: foo(x => x * x);
> 3 Scala: foo((x) => x * x)

foo(x => x * x) also works in this case

> 4 Python: foo(lambda x: x * x)
> 5 Ruby: foo { |bar| x * x }

Maybe you meant

foo { |x| x * x }

> 5 Ruby: foo do |x| x * x end
> 6 D: foo((int x) { return x * x; });
> 7 C++1x: foo([](int x){ return x * x; });
> 7 Apple's (Objective)-C(++)
> block extension: foo(^(int x){ return x * x; });
> 8 JavaScript:
> foo(function(x){ return x * x })
> 9 PHP: foo(function ($x) use($fooBar) {
> return $x * $x; }); // "use" is used for explicitly telling what
> variables should be available when the scope is gone.
> 
> Note that I have not listed any functional languages here because I've
> never used one.

For example:

Lambda calculus: λx.x*x
Haskell: \x -> x * x

As you can see, most of the verbosity comes from the fact that lambdas in 
D and C++ contain statements, not a single expression. It's like if-then-
else vs ternary ?:  -- In languages like Scala these are the same built-in 
feature.


Re: emscripten

2010-12-17 Thread retard
Fri, 17 Dec 2010 21:58:06 -0500, Jeff Nowakowski wrote:

> On 12/17/2010 09:18 PM, retard wrote:
>>
>> FWIW, JavaScript still isn't very efficiently supported on many
>> platforms.
> 
> Do you think performance is a problem for a mortgage calculator?
> 
> I think the performance issues of JavaScript are way overblown for the
> majority of use cases. I think the biggest problem is people keeping
> open lots of tabs with crappy JavaScript running from ad farms.

You can test this by running popular Google web applications on a 
vanilla installation of the Gnome or KDE desktop. Those are the most used 
desktop environments in the Linux land. Both have problems with the 
khtml / webkit browsers provided by the environments. The never-ending 
feed page in Google Reader quickly makes Gnome's browser choke. Some 
events in Gmail leave KDE's browser confused (not responding to further 
events). This is the typical behavior you get with Linux distributions 
that don't ship the latest Firefox / Opera / Chromium. I'd consider not 
being able to read your mail quite critical. It's just that the HTML 5 
developers don't care. Webmails have worked with HTML 3/4 browsers for 
over 15 years now.


Re: emscripten

2010-12-17 Thread retard
Fri, 17 Dec 2010 20:45:46 -0500, Jeff Nowakowski wrote:

> On 12/16/2010 03:04 PM, Nick Sabalausky wrote:
>>
>> I do make my pages usable both ways and I've found the extra effort to
>> be downright minimal. Unless you're doing things very, very, very
>> wrong, the vast majority of the work in a site is independent of JS vs
>> non-JS.
> 
> For the mortgage calculator, you have to implement the same
> functionality twice, once in JavaScript, and once as a backend
> calculation using request/response server navigation. The technologies
> are very different and it's nearly a complete duplication of work.
> 
>> And besides, no one's ever going to get me to agree with something
>> simply by trying to shame me into it with some idiotic
>> "newer-is-inherently-better", "Oh no! I don't want to be un-trendy!!"
>> line of dumbass sheep-think.
> 
> You're missing the point. 1995 is when JavaScript came out, and you
> couldn't depend on the browser having it. Now it's nearly ubiquitous, so
> there's very little benefit to spend the time making something like a
> mortgage calculator work without JavaScript.

FWIW, JavaScript still isn't very efficiently supported on many 
platforms. The latest IE beta (9), Opera, Chrome/Chromium daily builds, 
and Firefox betas might have a reasonable performance level, but the 
others often don't. There's more to the world than Windows and Mac OS X. 
Many users have Linux or BSD or some portable device with an integrated 
web browser. Not every mobile phone is an Android phone or an iPhone. At 
workplaces the corporate policy might prevent upgrades. E.g. I've worked 
in a company where they still use WinXP & IE 6 because new browsers would 
break expensive intranet apps.


Re: emscripten

2010-12-16 Thread retard
Thu, 16 Dec 2010 14:22:01 -0500, Nick Sabalausky wrote:

> "Michael Stover"  wrote in message
> news:mailman.1053.1292506694.21107.digitalmar...@puremagic.com...
>>
>> And CAPTCHAs prove that javascript and browsers are terrible???
>>
>>
> Where are you gettng that? That's not even remotely what he said. He was
> clearly saying that CAPTCHAs and registration are a counter-argument to
> the notion that most webapps are zero-config. Or at least that they're
> not really much better than having to do some basic config.

This guy is trolling here. Any sane person would have already given up. 
This discussion is more or less useless.


Re: Why Ruby?

2010-12-15 Thread retard
Wed, 15 Dec 2010 16:47:01 -0600, Andrei Alexandrescu wrote:

> On 12/15/10 4:42 PM, retard wrote:
>> Wed, 15 Dec 2010 16:33:43 -0600, Andrei Alexandrescu wrote:
>>
>>> On 12/15/10 4:18 PM, retard wrote:
>>>> Wed, 15 Dec 2010 22:23:35 +0100, Jacob Carlborg wrote:
>>>>> Array(1, 2, 3, 4, 5).sortWith(_>   _)
>>>>
>>>> The first instance of _ (from left to right) is replaced with the
>>>> first element of the parameter tuple, the second with second element,
>>>> etc.
>>>>
>>>> This is actually very useful since many lambdas only use 1-2
>>>> parameters. It has its limitations. For example referring to the same
>>>> parameter requires a named parameter or some other hack. Combined
>>>> with Haskell style partial application this allows stuff like:
>>>>
>>>> Array(1, 2, 3, 4, 5).foreach { println }
>>>>
>>>> Array(1, 2, 3, 4, 5).filter(2<)
>>>
>>> For short lambdas I prefer Phobos' convention of using "a" and "b",
>>> e.g. "2<  a" or "a<  b". Since it's a string, "_<  _" would have been
>>> usable with Phobos too but I wouldn't like such a change.
>>
>> I haven't had time to test this (still using D1 + Tango), but these
>> magical 'a' and 'b' make me wonder whether there are any namespace
>> issues. Can you refer to symbols defined in the current module or in
>> the Phobos module the collection function is declared? Do the 'a' and
>> 'b' shadow some other instances of 'a' and 'b'?
> 
> There are no hygiene issues, but lookup is limited to the modules
> included in std.functional. That goes well with the charter of short
> lambdas - I mean, they are short :o).

Ha, that's the thing I was after. Hopefully it's well documented. No need 
to answer, I can check it myself.


Re: Why Ruby?

2010-12-15 Thread retard
Wed, 15 Dec 2010 16:33:43 -0600, Andrei Alexandrescu wrote:

> On 12/15/10 4:18 PM, retard wrote:
>> Wed, 15 Dec 2010 22:23:35 +0100, Jacob Carlborg wrote:
>>> Array(1, 2, 3, 4, 5).sortWith(_>  _)
>>
>> The first instance of _ (from left to right) is replaced with the first
>> element of the parameter tuple, the second with second element, etc.
>>
>> This is actually very useful since many lambdas only use 1-2
>> parameters. It has its limitations. For example referring to the same
>> parameter requires a named parameter or some other hack. Combined with
>> Haskell style partial application this allows stuff like:
>>
>> Array(1, 2, 3, 4, 5).foreach { println }
>>
>> Array(1, 2, 3, 4, 5).filter(2<)
> 
> For short lambdas I prefer Phobos' convention of using "a" and "b", e.g.
> "2 < a" or "a < b". Since it's a string, "_ < _" would have been usable
> with Phobos too but I wouldn't like such a change.

I haven't had time to test this (still using D1 + Tango), but these 
magical 'a' and 'b' make me wonder whether there are any namespace 
issues. Can you refer to symbols defined in the current module or in the 
Phobos module where the collection function is declared? Do the 'a' and 
'b' shadow other instances of 'a' and 'b'?


Re: Why Ruby?

2010-12-15 Thread retard
Wed, 15 Dec 2010 22:23:35 +0100, Jacob Carlborg wrote:

> On 2010-12-14 22:04, Nick Sabalausky wrote:
>> "Jacob Carlborg"  wrote in message
>> news:ie8f5f$o6...@digitalmars.com...
>>>
>>> Probably not, but, for example, Scala allows very compact delegate
>>> literals:
>>>
>>> Array(1, 2, 3, 4, 5).select(_>  3).collect(_ * _)
>>>
>>> Or more verbose:
>>>
>>> Array(1, 2, 3, 4, 5).select((x) =>  x>  3).collect((x, y) =>  x * y)
>>>
>>> I'm not 100% sure I that the syntax is correct.
>>>
>>>
>> I'd be surprised if the first one is correct, because that "collect(_ *
>> _)" would seem highly limited (how would you use the same value twice,
>> or use just the second value, or use them in reverse order?).
> 
> I guess for anything more complicated you would have to use the =>
> syntax. BTW, I just verified that this works:
> 
> Array(1, 2, 3, 4, 5).sortWith(_ > _)

The first instance of _ (from left to right) is replaced with the first 
element of the parameter tuple, the second with second element, etc.

This is actually very useful since many lambdas only use 1-2 parameters. 
It has its limitations. For example referring to the same parameter 
requires a named parameter or some other hack. Combined with Haskell 
style partial application this allows stuff like:

Array(1, 2, 3, 4, 5).foreach { println }

Array(1, 2, 3, 4, 5).filter(2 <)


Re: emscripten

2010-12-15 Thread retard
Wed, 15 Dec 2010 13:18:16 -0500, Nick Sabalausky wrote:

> "Andrew Wiley"  wrote in message
> news:mailman.1026.1292433894.21107.digitalmar...@puremagic.com...
>> On Wed, Dec 15, 2010 at 9:37 AM, Adam D. Ruppe
>> wrote:
>>>
>>> And in those rare cases where you are doing a lot of client side work,
>>> it is so
>>> brutally slow that if you start piling other runtimes on top of it,
>>> you'll
>>> often
>>> be left with an unusable mess anyway!
>>
>>
>> Unless you're using the beta of the next IE, the beta of the next
>> Opera, or
>> the current version of Chrome, in which case you'd find that
>> client-side work is becoming more and more feasible. Now, it's not
>> there yet, but when a
>> C-ported-to-Java-compiled-to-Javascript version of Quake 2 can get
>> 30FPS in
>> Google Chrome, I start thinking that performance probably won't be
>> nearly as
>> bad as browsers move forward.
>>
>>
> A game that was designed to run on a 90-133MHz 16-24MB RAM machine (in
> *software* rendering mode), and was frequently able to get framerates in
> the hundreds on sub-500MHz machines (using hardware rendering - with the
> old, old, old 3dfx cards), manages to get *only* 30FPS in JS on a
> multi-GHz multi-core machine using what is clearly hardware rendering
> (on a modern graphics card), and I'm supposed to think that means JS is
> fast? If anything, that's *proof* of how horrid JS is - it turns a
> multi-GHz multi-core into a Pentium ~100MHz. What a joke!

Some points:

 - IIRC the game was further optimized since the first release. The 
requirements went down a bit. Especially the SIMD optimizations allowed 
much lower MHz requirements with software rendering. Nowadays the SIMD 
instructions give even better throughput.

 - Compilers have improved a lot. E.g. auto-vectorization. Requirements 
went down a bit again.

 - the Jake2 version also runs faster because of faster JVMs and better 
OpenGL libraries

 - OTOH resolutions have gone up... but if the game uses hardware 
accelerated opengl canvas, using Javascript instead of C doesn't have 
much effect

 - overall I think the CPU requirements have gone down even though higher 
resolution and expensive graphical effects are more common these days.

Indeed Quake II used to work very fast on Pentium II class hardware with 
Nvidia TNT cards. I think I got over 100 fps @ 1280x1024 over 10 years 
ago. Getting 30 FPS on average now IS a bad joke. The (graphics) hardware 
gets 100% faster every 12-18 months. Even if you make the game twice as 
fast as now, hardcore fps gamers wouldn't find the rate acceptable for 
modern network gaming. Besides, hardcore fps gamers don't play 
13-year-old games anymore.

I admit that JavaScript is getting faster and faster. However, at some 
point the language will hit the performance wall. LuaJIT is one of the 
fastest dynamic language implementations out there, and it's still 
drastically slower than statically typed languages. It probably shows how 
fast JavaScript can possibly get in raw computational tasks.

This is all ok for "casual gaming", but if you only get 30 FPS when 
running a 13-year-old game, it means you're 15-16 years behind the state 
of the art. OTOH slow software-rendered Flash applets were already used 
as a platform for casual gaming, so HTML5 doesn't change the situation 
that much. Maybe the greatest difference is that HTML5 also runs quite 
fast on Linux. The HTML5 hype also helps them in marketing. After all, 
it's the same old shit in a new package.

>> [HTML5, HTML5, HTML5, Chrome, HTML5, HTML5...]
> 
> Yea, *eventually* HTML5 will *improve* a few things...That hardly counts
> as "The web's not a shitty platform!".

It's just 15-16 years behind the state of the art. Not much!


Re: emscripten

2010-12-15 Thread retard
Wed, 15 Dec 2010 12:40:50 -0600, Andrew Wiley wrote:

> The point was that while Javascript is slow, it's getting fast enough
> to be useful. Yes, it's not C. It will never be. But the fact that any
> sort of realtime calculations are possible in it is a breakthrough that
> will be reflected in actual application code. Javascript was not
> designed to be fast, and honestly, it doesn't need to be fast to fill
> it's niche.

I'm not getting this. WHY should we use Javascript/HTML5 applications 
instead? I'm perfectly happy with my existing tools. They work nicely. It 
takes years to develop these applications on top of HTML5. I simply have 
no motivation to use web applications. They have several downsides:

 - you "rent" the app, you don't "own" it anymore
   => which leads to: advertisements, monthly fees
   - this is especially bad if you're already using free as in beer/
speech software
   - this is especially bad ethically if you're writing free software

 - worse privacy (do I want some Mark SuckerBerg to spy on my personal 
life for personal gain)

 - worse security (a networkless local box IS quite safe, if CIA is 
raiding your house every week, you're probably doing something wrong, 
otherwise, buy better locks)

 - worse performance (at least now and in the next few years)

 - worse usability

 - worse reliability (network problems, server problems)

I know the good sides. No need to mention them. In my opinion the 
downsides are still more important when making the decision.


Re: Slides from my ACCU Silicon Valley talk

2010-12-13 Thread retard
Tue, 14 Dec 2010 02:56:45 +0100, Andrej Mitrovic wrote:

> Why do /you/ take it personally?

You've misunderstood. I only wish the discussion was a bit more technical 
and had less to do with opinions and hype. The reason is, a more 
technical approach might solve technical problems in a more efficient way. 
But my goal is not to belittle social issues wrt language adoption. 
Surely I understand reddit isn't lambda-the-ultimate.org and I'm glad 
that the Go trolls didn't find the thread (yet). I just find this 
behavior incomprehensible. 

My personal stance on this matter is that I believe a more consistent and 
flexible mechanism for operators would fit D. I'm also a bit more of a 
fan of C++0x concepts than the constraints shown in the slides. I 
haven't really thought about how it all would work out, but if the 
atmosphere were more ambitious in this direction, I could participate 
more. But it seems my vision conflicts badly with what D2 has become.

> 
> On 12/14/10, retard  wrote:
>> Mon, 13 Dec 2010 14:44:36 -0500, snk_kid wrote:
>>
>>> Gary Whatmore Wrote:
>>>
>>>> Simen kjaeraas Wrote:
>>>>
>>>> > Walter Bright  wrote:
>>>> >
>>>> > > Andrei Alexandrescu wrote:
>>>> > >> Compared to the talk at Google, I changed one of the "cool
>>>> > >> things" from threading to operator overloading. Didn't manage to
>>>> > >> talk about that - there were a million questions - although I
>>>> > >> think it's a great topic.
>>>> > >>  http://erdani.com/tdpl/2010-12-08-ACCU.pdf
>>>> > >
>>>> > >
>>>> > > Anyone care to do the honors and post this to reddit programming?
>>>> >
>>>> > Done.
>>>> >
>>>> > http://www.reddit.com/r/programming/comments/eklq0/
>> andrei_alexandrescus_talk_at_accu_silicon_valley/
>>>>
>>>> Guys, I made several sockpuppet reddit accounts to mod down the two
>>>> guys criticising this thread. I recommend everyone to help us
>>>> improve D's publicity by ignoring these trolls and voting them down.
>>>> It has worked before, too -- reddit seems to fold the subthreads
>>>> that get too many negative votes. This makes it look much better
>>>> than it is.
>>>>
>>>>  - G.W.
>>>
>>> That's absolutely pathetic, you're actually doing the community a
>>> disservice.
>>
>> I really don't know what to say. Take a look at
>>
>> 0 points: http://www.reddit.com/r/programming/comments/eklq0/
>> andrei_alexandrescus_talk_at_accu_silicon_valley/c18swbi
>>
>> or
>>
>> -1 points: http://www.reddit.com/r/programming/comments/eklq0/
>> andrei_alexandrescus_talk_at_accu_silicon_valley/c18sz8n
>>
>> These say nothing against D. Why does one take them personally? They
>> are both also highly informative. As far as I can tell, these two
>> comments go much deeper in operator semantics theory than the combined
>> effort of 68 other threads by Walter, Andrei et al. For example the
>> precedence of operators can get problematic when using several
>> libraries from various vendors.
>>
>> Then you have:
>>
>> http://www.reddit.com/r/programming/comments/eklq0/
>> andrei_alexandrescus_talk_at_accu_silicon_valley/c18t1d5
>>
>> "I really like D (2.0) and I wish it would take off."
>>
>> 7 points? WTF? What is the value of this reply? It's a purely
>> subjective opinion and doesn't necessarily even beg for further
>> discussion.
>>



Re: Slides from my ACCU Silicon Valley talk

2010-12-13 Thread retard
Mon, 13 Dec 2010 14:44:36 -0500, snk_kid wrote:

> Gary Whatmore Wrote:
> 
>> Simen kjaeraas Wrote:
>> 
>> > Walter Bright  wrote:
>> > 
>> > > Andrei Alexandrescu wrote:
>> > >> Compared to the talk at Google, I changed one of the "cool things"
>> > >> from threading to operator overloading. Didn't manage to talk
>> > >> about that - there were a million questions - although I think
>> > >> it's a great topic.
>> > >>  http://erdani.com/tdpl/2010-12-08-ACCU.pdf
>> > >
>> > >
>> > > Anyone care to do the honors and post this to reddit programming?
>> > 
>> > Done.
>> > 
>> > http://www.reddit.com/r/programming/comments/eklq0/
andrei_alexandrescus_talk_at_accu_silicon_valley/
>> 
>> Guys, I made several sockpuppet reddit accounts to mod down the two
>> guys criticising this thread. I recommend everyone to help us improve
>> D's publicity by ignoring these trolls and voting them down. It has
>> worked before, too -- reddit seems to fold the subthreads that get too
>> many negative votes. This makes it look much better than it is.
>> 
>>  - G.W.
> 
> That's absolutely pathetic, you're actually doing the community a
> disservice.

I really don't know what to say. Take a look at

0 points: http://www.reddit.com/r/programming/comments/eklq0/
andrei_alexandrescus_talk_at_accu_silicon_valley/c18swbi

or 

-1 points: http://www.reddit.com/r/programming/comments/eklq0/
andrei_alexandrescus_talk_at_accu_silicon_valley/c18sz8n

These say nothing against D. Why does one take them personally? They are 
both also highly informative. As far as I can tell, these two comments go 
much deeper in operator semantics theory than the combined effort of 68 
other threads by Walter, Andrei et al. For example the precedence of 
operators can get problematic when using several libraries from various 
vendors.

Then you have:

http://www.reddit.com/r/programming/comments/eklq0/
andrei_alexandrescus_talk_at_accu_silicon_valley/c18t1d5

"I really like D (2.0) and I wish it would take off."

7 points? WTF? What is the value of this reply? It's a purely subjective 
opinion and doesn't necessarily even beg for further discussion.


Re: How convince computer teacher

2010-12-13 Thread retard
Mon, 13 Dec 2010 16:45:09 -0500, Austin Hastings wrote:

> On 12/9/2010 11:27 AM, Ddev wrote:
>> hi community,
>> How convince my teacher to go in D ?
>> After talk with my teacher, i do not think D is good because after 10
>> years is not become the big one. she is very skeptical about D. If i
>> could convince my teacher it will be great maybe i will teach to his
>> students :)
> 
> Please don't.

[snip] 

> Bottom line: you'd be wasting your time and your teacher's time. If
> you're still in school, you shouldn't be looking at D at all. You should
> be learning some of the functional languages to stretch your brain, or
> learning some of the popular procedural languages to pad your resume.

(To those who wonder whether he is a troll just because he disagrees 
with you -- as far as I can tell, the identity is real and unique. It 
just so happens that not everyone wants to advocate D blindly.)

I agree with you, you have great points there. In my school they used to 
have different computer science programs for theoretical stuff and 
"vocational" engineering studies. The latter mostly used commercial 
Java/.NET/C++ toolchains to do the work. The theoretical program used 
easily available languages (Scheme, C, Assembly, Pascal, Java, Haskell, 
Coq). The focus __wasn't__ on languages. The main goals were:

1) basic programming: how to program a computer, how to use abstractions, 
how to program in the small using procedures, classes, and objects. how 
to use build tools, IDEs, editors, command line (Scheme | Pascal | Java)

2) data structures and algorithms: time and space complexity, graphs, 
trees, lists, arrays, dynamic programming, maximum flow, divide and 
conquer, parallel programming, and so on. (A Pascal-like pseudo language 
was used; the idea was to provide a pure language with a small number of 
features and few hidden costs.)

3) low level programming: how the CPU works, how the memory system works, 
how binaries are constructed (Assembly)

4) programming languages: declarative, functional, stack, concurrent, 
logic, imperative, OOP, scripting (Haskell, Java, Perl, C, ... __oldest 
possible languages of that paradigm__)

5) practical programming with libraries: audio programming, graphics 
programming, AI, network programming, ... (mostly C/C++/Java)

6) functional languages and theorem proving (Haskell, Coq, ...)

7) software development practices: how to program/manage in large projects 
(version control, testing, project management, entrepreneurship, computer 
systems) (mostly Java/PHP/C++ in project works)

8) operating systems, networks, cryptography, compilers, mathematics: I 
don't think these were demonstrated with any languages.

I fail to see where D fits in.

1) You would have to use a subset of D for basic programming. Providing 
too much information is harmful. Currently even SafeD is too big for this 
task. And the specification of SafeD isn't available anywhere. The lack 
of a good 64-bit compiler is also a great problem. It's intellectually 
dishonest to claim that D is ready for this task. It could be! But it 
isn't now, and it won't be in the near future.

2) The idea was to avoid language dependencies. You had to explicitly 
write all algorithms and data structures, not use built-in ones. The 
pseudo language was really simple, similar to Pascal, but with some 
useful extensions. Many algorithmics textbooks also use this kind of 
language. It made it really easy to study several of these books during 
the courses. If I had used D, it would be harder to read 40+ years worth 
of algorithm & data structure books.

3) I think a pure assembler is much more useful than D's inline 
assembler for this stuff. We also learned how object file formats work 
(sections etc.) and how you can control the resulting binaries with 
special features. D doesn't really help here, does it? How is it better 
than a fully tested, production ready, 100% free/open, portable assembler?

4) This was mostly a list of language history courses. Very interesting. 
We did not prove how D is much better than everything else. We studied 
tens of languages. The focus wasn't on practical language skills. We only 
learned how features like objects and exceptions and functional language 
thunks etc. are implemented (not in a single language, but in each one of 
them, also the bad implementations). How features can be combined (also 
the bad choices). How many different paradigms there are. What the world 
looked like in 1970. Really, why is D better than any of these languages? 
How would you build the material around D?

5) Where are the libraries? Where is the good documentation? One example: 
http://www.processing.org/ - does D really have something as good? 
http://lwjgl.org/ - does D2 have an up-to-date library similar to this? Need 
more examples? http://www.springsource.org/ http://www.hibernate.org/ 
Where are the application servers, the web server plugins? Competitive 
XML parsers? AI libraries? IDE integration? The sad truth i

Re: Why Ruby?

2010-12-13 Thread retard
Mon, 13 Dec 2010 14:23:24 -0500, Nick Sabalausky wrote:

> "Ary Borenszweig"  wrote in message
> news:ie5r0q$86...@digitalmars.com...
>> This is how:
>>
>> http://www.youtube.com/asterite#p/u/10/oAhrFQVnsrY
> 
> Cool :)
> 
> I don't use Eclipse because there's a lot I don't like about it for
> normal day-to-day coding, but I may install it with Descent and/or that
> other newer D plugin just as a secondary tool for doing that neat stuff.

Beware, the startup time might be unacceptable.


Re: why a part of D community do not want go to D2 ?

2010-12-02 Thread retard
Thu, 02 Dec 2010 13:21:57 +, Bruno Medeiros wrote:

> On 30/11/2010 19:02, Stewart Gordon wrote:
>> On 30/11/2010 14:13, Bruno Medeiros wrote:
>>> On 30/11/2010 14:08, Stewart Gordon wrote:
 On 29/11/2010 18:30, Bruno Medeiros wrote:
>> 
> Did you mean D2, in "Sick of waiting for D1 to be finished." ?

 I don't know what you mean
>> 
>>> I meant: did you mean "Sick of waiting for D2 to be finished." instead
>>> of "Sick of waiting for D1 to be finished." ? Otherwise I don't quite
>>> get it, D1 is quite stable (as a language), it's D2 that is getting a
>>> lot of changes.
>>
>> I guess it was really a question of why you meant it rather than what
>> you meant.
>>
>> D1 may be "quite stable", but that's very different from "finished".
>> http://www.digitalmars.com/d/archives/digitalmars/D/
When_will_D1_be_finished_89749.html
>>
>> http://d.puremagic.com/issues/show_bug.cgi?id=677
>>
>> The point is that D1 should have been finished ages ago. Development of
>> D2 has detracted from this, and so people are fighting against this by
>> not supporting the D2 project.
>>
>> Stewart.
> 
> Given what you meant by finished (consistently defined language, etc.),
> then yeah, D1 isn't really finished. But what do you mean "D1 should
> have been finished ages ago"? As far as I know, Walter never expressed
> the intention of making D1 "finished", that is, to flesh out and
> formalize the language spec. Rather, the creation of D2 was to make D1
> "stable" (to not introduce backwards incompatible changes to the
> language, and to reduce bugs in the D1 compiler).
> 
> If you mean "should" as in, that should have been the intention, well,
> that's arguable. If one wanted at that point for D to stop evolving (in
> a non backwards-compatible way), then yes, you'd want the main focus of
> attention to be D1, and in finishing it. But that wasn't the desire with
> many (probably most) in the community, including Walter himself, so D2
> became the main focus of development. And with this decision, fleshing
> out the D1 spec was never going to be important (as in, important enough
> to dedicate time to it in a significant way).

Even if Walter refused to work on D1 anymore, some believe that the 
language should be finished in any case. This means that the community 
continues the work (writing specs, developing build tools etc.) Why is 
this surprising? This has happened with almost all languages. It's open 
source, you're free to work on it. That's the price you pay. You can't 
force people to work on D2 instead.


Re: D on Wikipedia [Was: Re: Setting the stack size]

2010-12-02 Thread retard
Thu, 02 Dec 2010 12:07:19 +0200, so wrote:

>> I'd introduce the templates using code written in C++ and then list the
>> differences between C++ and D. After all, C++ and C++ TMP are widely
>> known. Even I have few books of them in my bookshelf and ps/pdf papers
>> discussing C++ TMP. There aren't any books or peer reviewed articles
>> about D's metaprogramming, right? Prioritizing D over C++ doesn't make
>> sense, the citations should emphasize notable relevant sources.
> 
> Someone in some third world country proved Goldbach's conjecture. Proof
> is there and accepted by everyone that understand what/how he does it.
> What are you going to do as the all mighty objective wikipedia?

Agreed, I don't like some of Wikipedia's policies or editors, but this 
isn't the case now. D came a bit late to the party - many books were 
already written. D's documentation doesn't discuss metaprogramming in 
general very extensively. Even when you just want to cite _something_, 
the C++ literature isn't that bad. The same information implemented in D 
is scattered around the net in newsgroup articles, Dr. Dobb's Journal, 
Bartosz's blog, TDPL, and so on. You don't have a single authoritative, 
peer reviewed source about D metaprogramming. There's more to it than 
language syntax and semantics.

> 
>> It clearly seems that both C++ and D communities think they invented
>> the term CTFE. In C++ the functions are "meta-functions" (templates)
>> [1], in D
>> "ordinary" functions. But the same shit comes with a different name in
>> other languages. It's essentially the same concept of metaprogramming.
>>
>> [1] http://www.amazon.com/Template-Metaprogramming-Concepts-Techniques-
>> Beyond/dp/0321227255
> 
> CTFE in essance meta-programming i agree, but comparing to others like
> comparing a tree to a forest.

Like I said, the articles would need a review. The particular page is 
already full of unrelated text.

> And no one here claimed D is the inventor of meta-programming.

Bearophile argued that Wikipedia in general dismisses D harshly due to 
political reasons. I don't find this true. And like I said, it's not a 
competition. It's not a magazine where you can post your ads, it's an 
encyclopedia. More text about D isn't better.

I find this D language advocacy in Wikipedia disgusting - clearly, you 
should document notable features of D, but the main objective cannot be 
as much visibility as possible. Some of the related programming articles 
try to be generic, language agnostic. There's a language independent 
introduction and examples in various languages. And when there's an 
examples section, there should be a balance between the languages, e.g. 
one or two examples per language, not 1 example in other languages and 
100 in D just because D has so many more features. I hate this kind of 
desperate pushing. What the heck are we, a religious cult?

He (or someone else) previously complained that the language shootout 
guy doesn't include D in the test, so he must hate D for some reason. I 
think that if a language has real technical merits, this kind of worrying 
about the public image is silly. If you worry about D's notability, write 
more articles about D and more code in D.


Re: D on Wikipedia [Was: Re: Setting the stack size]

2010-12-02 Thread retard
Thu, 02 Dec 2010 11:10:33 +0200, so wrote:

>> http://en.wikipedia.org/w/index.php?
>> title=Template_metaprogramming&diff=64616972&oldid=64616688
>>
>> The discussion page mentions it doesn't add any value and I can't
>> disagree.
> 
> They might be clueless to say that, but you?

I agree static-if, alias parameters, and the other extensions are worth 
mentioning, but syntactical changes more or less aren't.

I'd introduce the templates using code written in C++ and then list the 
differences between C++ and D. After all, C++ and C++ TMP are widely 
known. Even I have a few books on them in my bookshelf and ps/pdf papers 
discussing C++ TMP. There aren't any books or peer reviewed articles 
about D's metaprogramming, right? Prioritizing D over C++ doesn't make 
sense; the citations should emphasize notable relevant sources.

> 
>> Resists? You weren't able to fill it with D propaganda? It already
>> lists the DigitalMars pages as only references. And provides 2/3
>> examples in D. What else should it do?
> 
> What propaganda are you talking about? If they are not some populist
> pricks first thing you would see on that page would be D.

It clearly seems that both the C++ and D communities think they invented the 
term CTFE. In C++ the functions are "meta-functions" (templates) [1], in D 
"ordinary" functions. But the same shit comes with a different name in 
other languages. It's essentially the same concept of metaprogramming.

[1] http://www.amazon.com/Template-Metaprogramming-Concepts-Techniques-
Beyond/dp/0321227255


Re: D on Wikipedia [Was: Re: Setting the stack size]

2010-12-02 Thread retard
Wed, 01 Dec 2010 20:52:47 -0500, bearophile wrote:

>> On Windows with DMD this is how to set the max stack size to about 1.5
>> GB of the "test.d" module: dmd -L/STACK:15 test.d
> 
> D is good for allowing to add the last values to the results table for n
> up to 25: http://en.wikipedia.org/wiki/Man_or_boy_test The reference to
> D was later removed by someone, of course.

Nothing D specific really, they argued that the article should remain 
clean since the test was designed for *Algol*, not D. It's not a language 
competition, people only want to know what the 'Man or boy test' is. By 
your logic, all those programming articles should include 500+ 
implementations of the algorithm in various languages to avoid any kind 
of discrimination. It's a general purpose encyclopedia, not a language 
competition, understand that? Write your competition code on sites like 
Rosetta Code.

The implementations are here:

http://en.wikipedia.org/wiki/Wikipedia_talk:Articles_for_creation/
Submissions/Man_or_boy_test_implementations

They haven't yet decided whether they're worth a new article.

> They have even removed D
> examples from the template metaprogramming page, etc.

etc. ? What else?

The generic programming constructs of D have already been discussed here:

http://en.wikipedia.org/wiki/Generic_programming#Templates_in_D

Repeating the same shit provides little additional value IMHO. I think 
the whole template metaprogramming article is redundant and all the 
metaprogramming articles should have a better organization.

The particular D code that was removed was:

http://en.wikipedia.org/w/index.php?
title=Template_metaprogramming&diff=64616972&oldid=64616688

The discussion page mentions it doesn't add any value and I can't 
disagree.

The article should really go through review. It also discusses static 
polymorphism which isn't only related to templates. CRTP is also possible 
in Java, C#, Scala etc. It should be removed from that page. It actually 
already has a new page:

http://en.wikipedia.org/wiki/Curiously_recurring_template_pattern

> The page about CTFE resists still:
> http://en.wikipedia.org/wiki/Compile_time_function_execution

Resists? You weren't able to fill it with D propaganda? It already lists 
the DigitalMars pages as only references. And provides 2/3 examples in D. 
What else should it do?

> Wikipedia
> looks like a fair place based on rules and laws, but in truth a lot of
> its contents are determined by politics.

You aren't helping that with that FUD.

> If there are enough people
> interested in keeping a page/topic alive, then it survives.

Notability guidelines.

> So you are
> able to find many page about single Pokemon characters (some of them are
> cute, but they cultural importance is not huge), but no pages (because
> they have deleted it) about some useful software.

But pikamen are notable!

> 
> Bye,
> bearophile



Re: D and multicore

2010-11-13 Thread retard
Not really an answer to your questions, but according to my Google 
Reader this appeared just a few minutes ago:

http://lambda-the-ultimate.org/node/4134

I don't think we've heard the last word on this domain of programming.


Re: One year of Go

2010-11-13 Thread retard
Sat, 13 Nov 2010 08:27:04 -0500, bearophile wrote:

> retard:
> 
>> Any links to relevant research?
> 
> If your JavaScript function ends with this, what kind of errors or
> return value does it generate?
> 
> return
> 2 + 2;
> 
> Found in this thread:
> http://stackoverflow.com/questions/1995113/strangest-language-feature

You also mentioned Scala. In Scala you need to write the return type in 
the function signature when explicitly using 'return'. The only case 
where this can fail in Scala is when the return type is Unit. It would be 
strange to try to return 2+2 in that case.

Andrei also mentioned this:

> method fun() { 42 }
>   to return an integer, and
> method fun() { 42; }
>   to return void.

in Scala both

> def fun = { 42; }
> def fun = { 42 }

return the same function.

The only case in Scala where the expression fails is when you write a 
multiline expression and forget to surround it with braces or 
parentheses.


Re: One year of Go

2010-11-13 Thread retard
Sat, 13 Nov 2010 10:48:00 +, Russel Winder wrote:

> On Sat, 2010-11-13 at 08:51 +0000, retard wrote: [ . . . ]
>> There's also the software transactional memory technology.
> 
> I am ambivalent about STM.  Haskell has it, Clojure has it, Intel have a
> variant for C and C++ but are trying to quietly ignore it.  Sun even
> tried to put hardware support for transactional memory into a chip --
> but the sale of Sun to Oracle has terminated that work.
> 
> On the one hand STM is just a sticking plaster trying to allow shared
> memory multithreading to work as though there was no need for
> synchronization and care on the part of the programmer.   On the other
> hand it makes shared-memory multithreading less full of locks,
> semaphores and monitors.
> 
> All in all, unless STM gets picked up and widely used in C, C++, Java
> and Scala -- also D of course :-) -- I don't see it going anywhere.

Certainly. But it does solve the problem in many cases, right? At least 
I'm willing to use technologies that solve the problem. The popularity 
doesn't matter that much.

The fact that Sun's project was terminated might have more to do with 
politics and how it could bring profit to Oracle. Many good projects have 
been abandoned. STM causes some overhead, and the hardware could be used 
to mitigate that.


Re: One year of Go

2010-11-13 Thread retard
Sat, 13 Nov 2010 07:53:14 +, Russel Winder wrote:

> On Fri, 2010-11-12 at 15:07 -0500, Jeff Nowakowski wrote: [ . . . ]
>> The lack of generics and dangerous concurrency are much bigger issues.
>> If D can actually be shown to be a useful concurrent language, instead
>> of the buggy and incomplete mess it is now, then it might have
>> something to crow about.
> 
> What do you see as wrong with the Go model for concurrency?
> 
> I find the process/message-passing approach infinitely easier than
> shared-memory multithreading with all its needs for locks, monitors,
> semaphores or lock-free programming.  True operating systems will need
> these latter techniques, but surely they are operating system level ones
> and should never have to appear in application code?

There's also the software transactional memory technology.


Re: One year of Go

2010-11-13 Thread retard
Fri, 12 Nov 2010 12:44:37 -0500, bearophile wrote:

> Russel Winder:
> 
>> I have to say I quite like not having to have semicolon statement
>> terminators.
> 
> I too don't like to add the semicolon at the end of lines, I like to
> write Python code that doesn't need them, **but in some languages 
(Scala, JavaScript) this has caused more problems than it solves.**

Any links to relevant research?


Re: The D Scripting Language

2010-11-13 Thread retard
Fri, 12 Nov 2010 23:01:24 +0600, Alexander Malakhov wrote:

> Gary Whatmore wrote in his message of Thu, 11 Nov 2010
> 20:07:35 +0600:
> 
>> Alexander Malakhov Wrote:
>>> ...
>>> Maybe it would be better to just make rdmd to surround source code
>>> with:
>>>
>>> //- rdmd generated text BEGIN
>>> public import std.stdio, ...
>>>
>>> void main( string[] args ){
>>> //- rdmd generated text END
>>>
>>> // programmer's code
>>> }
>>>
>>> in cases when rdmd detects there is no main()
>>
>> No, it could do that in all cases. D supports nested declarations. This
>> is how the other languages do this. It would improve the score a lot.
>> Did TDPL talk script programming? We can still change this radically
>> without breaking D2 - thank god the specification is informal and
>> incomplete.
> 
> Then you have 2 issues:
> 
> void main(string[] args){
> 
>   import std.stdio; // 1. will not compile void main(string[] args){
>   writeln("hello");
>   }
> 
>   main(args); // 2. this should be appended, hence anyway rdmd 
should
> analyze
>   //if there is main()
> }

I don't have any opinion on this, but point 1) makes me ask why imports 
can't be used inside methods just like in Scala. There's no technical 
reason other than "this adds bugs!" - and at least no scientific research 
can prove that, since Scala hasn't been in wide use that long.
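
For what it's worth, later D compilers do accept scoped imports inside 
functions; a minimal sketch of my own, assuming a reasonably recent DMD:

import std.stdio; // module-level import, as usual

void greet()
{
    // Function-local import: the symbol is only visible inside this scope.
    import std.string : toUpper;
    writeln("hello".toUpper()); // prints HELLO
}

void main()
{
    greet();
}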



Re: One year of Go

2010-11-13 Thread retard
Fri, 12 Nov 2010 11:54:35 -0500, Jeff Nowakowski wrote:

> On 11/12/2010 11:29 AM, Sean Kelly wrote:
>> To me, what they're saying is that their syntax is broken and so it
>> forces a convention upon the users to deal with the issue.  I know this
>> is just a bike shed issue, but seeing something like this in the
>> beginning of the tutorial makes it difficult for me to take them
>> seriously.
> 
> Yes, semicolons are a bike shed issue, and dismissing the whole language
> because of that is petty. Walter has made a point in the past about
> people who will look for one reason to dismiss something.

That argument of course only applies when looking at D. Walter thinks all 
other competitive languages are pure shit.


Re: why a part of D community do not want go to D2 ?

2010-11-11 Thread retard
Thu, 11 Nov 2010 23:59:36 +0100, spir wrote:

> (3) most texts we deal with
> today only hold common characters that have a single-code
> representation. So that everybody plays with strings as if (1 code <-->
> 1 char).

That might be true for many Americans. But even then a single byte can't 
express all the characters you need in everyday communication. There are 
countless people with é or ë or ü in their last name. ” and “ are 
probably not among the first 128-256 codes. Using e instead of ë or é 
might work to some extent, but ü and u are pronounced differently. Some 
use ue instead.
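
A tiny D sketch of my own showing why the (1 code <--> 1 char) shortcut 
breaks as soon as such names show up:

import std.stdio;
import std.range : walkLength;

void main()
{
    string name = "Müller";   // D strings are UTF-8
    writeln(name.length);     // 7 code units - the precomposed ü takes two bytes
    writeln(name.walkLength); // 6 code points, which is what a human would count
}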


Re: Thoughts on parallel programming?

2010-11-11 Thread retard
Thu, 11 Nov 2010 16:32:03 -0500, bearophile wrote:

> Walter:
> 
>> Yup. I am bemused by the efforts put into analyzing loops so that they
>> can (by the compiler) be re-written into a higher level construct, and
>> then the higher level construct is compiled.
>> 
>> It just is backwards what the compiler should be doing. The high level
>> construct is what the programmer should be writing. It shouldn't be
>> something the compiler reconstructs from low level source code.
> 
> I agree a lot. The language has to offer means to express all the
> semantics and constraints, that the arrays are disjointed, that the
> operations done on them are pure or not pure, that the operations are
> not pure but determined only by a small window in the arrays, and so on
> and on. And then the compiler has to optimize the code according to the
> presence of SIMD registers, multi-cores, etc. This maybe is not enough
> for max performance applications, but in most situations it's plenty
> enough. (Incidentally, this is a lot what the Chapel language does (and
> D doesn't), and what I have explained in two past posts about Chapel,
> that were mostly ignored.)

How does Chapel work when I need to sort data (just a basic quicksort on 
12 cores, for instance), or e.g. compile many files in parallel, or 
encode Xvid? What would the content of the array be in the Xvid case?


Re: Thoughts on parallel programming?

2010-11-11 Thread retard
Thu, 11 Nov 2010 20:01:09 +, retard wrote:

> in CPUs the
> problems with programmability are slowing things down and many laptops
> are still dual-core despite multiple cores are more energy efficient
> than higher GHz and my home PC has 8 virtual cores in a single CPU.

At least it seems so to me. My last 1 and 2 core systems had a TDP of 65 
and 105W. Now it's 130W, and the next gen has 12 cores and a 130W TDP.

So I currently have 8 CPU cores and 480 GPU cores. Unfortunately many 
open source applications don't use the GPU (OpenGL 1.0 at best, usually 
software rendering; the GPU accelerated desktops are still buggy and 
crash prone) and are single threaded. Even some heavier tasks like video 
encoding use the cores very inefficiently. Would MPI help?


Re: Thoughts on parallel programming?

2010-11-11 Thread retard
Thu, 11 Nov 2010 19:41:56 +, Russel Winder wrote:

> On Thu, 2010-11-11 at 15:16 +0100, Fawzi Mohamed wrote: [ . . . ]
>> on this I am not so sure, heterogeneous clusters are more difficult to
>> program, and GPU & co are slowly becoming more and more general
>> purpose. Being able to take advantage of those is useful, but I am not
>> convinced they are necessarily the future.
> 
> The Intel roadmap is for processor chips that have a number of cores
> with different architectures.  Heterogeneity is not going going to be a
> choice, it is going to be an imposition.  And this is at bus level, not
> at cluster level.
> 
> [ . . . ]
>> yes many core is the future I agree on this, and also that distributed
>> approach is the only way to scale to a really large number of
>> processors.
>> Bud distributed systems *are* more complex, so I think that for the
>> foreseeable future one will have a hybrid approach.
> 
> Hybrid is what I am saying is the future whether we like it or not.  SMP
> as the whole system is the past.
> 
> I disagree that distributed systems are more complex per se.  I suspect
> comments are getting so general here that anything anyone writes can be
> seen as both true and false simultaneously.  My perception is that
> shared memory multithreading is less and less a tool that applications
> programmers should be thinking in terms of.  Multiple processes with an
> hierarchy of communications costs is the overarching architecture with
> each process potentially being SMP or CSP or . . .
> 
>> again not sure the situation is as dire as you paint it, Linux does
>> quite well in the HPC field... but I agree that to be the ideal OS for
>> these architectures it will need more changes.
> 
> The Linux driver architecture is already creaking at the seams, it
> implies a central monolithic approach to operating system.  This falls
> down in a multiprocessor shared memory context.  The fact that the Top
> 500 generally use Linux is because it is the least worst option.  M$
> despite throwing large amounts of money at the problem, and indeed
> bought some very high profile names to try and do something about the
> lack of traction, have failed to make any headway in the HPC operating
> system stakes.  Do you want to have to run a virus checker on your HPC
> system?
> 
> My gut reaction is that we are going to see a rise of hypervisors as per
> Tilera chips, at least in the short to medium term, simply as a bridge
> from the now OSes to the future.  My guess is that L4 microkernels
> and/or nanokernels, exokernels, etc. will find a central place in future
> systems.  The problem to be solved is ensuring that the appropriate ABI
> is available on the appropriate core at the appropriate time.  Mobility
> of ABI is the critical factor here.
> 
> [ . . . ]
>> Whole array operation are useful, and when possible one gains much
>> using them, unfortunately not all problems can be reduced to few large
>> array operations, data parallel languages are not the main type of
>> language for these reasons.
> 
> Agreed.  My point was that in 1960s code people explicitly handled array
> operations using do loops because they had to.  Nowadays such code is
> anathema to efficient execution.  My complaint here is that people have
> put effort into compiler technology instead of rewriting the codes in a
> better language and/or idiom.  Clearly whole array operations only apply
> to algorithms that involve arrays!
> 
> [ . . . ]
>> well whole array operations are a generalization of the SPMD approach,
>> so I this sense you said that that kind of approach will have a future
>> (but with a more difficult optimization as the hardware is more
>> complex.
> 
> I guess this is where the PGAS people are challenging things.
> Applications can be couched in terms of array algorithms which can be
> scattered across distributed memory systems.  Inappropriate operations
> lead to huge inefficiencies, but handles correctly, code runs very fast.
> 
>> About MPI I think that many don't see what MPI really does, mpi offers
>> a simplified parallel model.
>> The main weakness of this model is that it assumes some kind of
>> reliability, but then it offers
>> a clear computational model with processors ordered in a linear of
>> higher dimensional structure and efficient collective communication
>> primitives.
>> Yes MPI is not the right choice for all problems, but when usable it is
>> very powerful, often superior to the alternatives, and programming with
>> it is *simpler* than thinking about a generic distributed system. So I
>> think that for problems that are not trivially parallel, or easily
>> parallelizable MPI will remain as the best choice.
> 
> I guess my main irritant with MPI is that I have to run the same
> executable on every node and, perhaps more importantly, the message
> passing structure is founded on Fortran primitive data types.  OK so you
> can hack up some element of abstraction so as to send complex messages,
> bu

Re: why a part of D community do not want go to D2 ?

2010-11-10 Thread retard
Wed, 10 Nov 2010 11:56:18 +0100, Jacob Carlborg wrote:

> On 2010-11-10 00:00, Walter Bright wrote:
>> I agree. The reasons for the Tango split long ago, whatever the merit
>> of those reasons was, have long since passed. Producing another
>> incompatible split with D2 will not be of an advantage to anyone, and
>> will just give people reasons not to use D at all.
>>
>> Jacob has recently decided to help out with improvements to druntime; I
>> take that as a very welcome sign towards ending the differences.
> 
> I don't want to increase any separation in the D community and would
> hope peoeple could agree more. I have no problems what so ever
> contributing both to Tango and Phobos/druntime. And I'm happy to license
> any of my code to whatever license would be need for a give D project.

A dual licensing scheme for all code might help a bit (since both parties 
refuse to switch licensing). There are also

 - stylistic issues (OOP style structured Tango vs quick'n'dirty Phobos 
API) -> this causes annoying technical incompatibilities

 - psychological issues (Tango's charismatic leaders vs dull politically 
correct office persons and almost anonymous lone coders porting Boost 
code written in other languages). I believe strong personalities like Jon 
Harrop and Paul Graham actually have an overall positive effect. It's not 
a big secret that Andrei has boosted D's adoption quite a bit - this has 
more to do with the strong personality than technical issues.

 - project management issues (Tango uses trac heavily and the leaders 
have modern project management skills, Phobos developers have developed a 
new inefficient ad-hoc software process model without the big picture 
'planning' phase and without any communication between the team and the 
product owner)

 - platform issues (not everyone agrees D2 is a perfect upgrade route - 
how is this even surprising? Look at the number of people *not* using D, 
it shouldn't be a surprise that there are people who dislike D2, but like 
D1)

 - an axe fight between some key persons. I believe this can be solved if 
there weren't those other annoying problems.

These are all my subjective opinions. Feel free to throw the first rock, 
after all I'm just a stupid troll.

For me the technical issues have the greatest priority. If I want a 
fully flexible Java-style stream I/O interface and that kind of thing, 
there's no way in hell I'll let you shove the Phobos style ideology down 
my throat. I'd have to create a "PhoTango" wrapper to actually use these.

The political issues aren't that interesting. If I'm coding in Java or 
C#, I don't even know the names of the stdlib developers. Maybe Doug Lea. 
But he left Oracle for political reasons...


Re: null [re: spec#]

2010-11-07 Thread retard
Sun, 07 Nov 2010 14:09:01 -0800, Walter Bright wrote:

> Simen kjaeraas wrote:
>> You misunderstand. The idea is this:
>> 
>> void foo( ) {
>>   Object p;
>>   if ( m ) {
>> p = new Object( );
>> p.DoSomethingThatNeedsToBeDoneNow( );
>>   }
>>   // 20 lines of code here
>>   if ( m ) {
>> p.doSomethingWeird( dataFromAbove );
>>   }
>> }
> 
> You're right, the real cases where this kind of thing occurs are much
> more complex. I just posted the thing boiled down.
> 
> And, of course, there's always a way to refactor the code to eliminate
> the spurious error message. But sometimes the result is as ugly as
> Pascal's efforts to prove you really don't need a 'break' statement in a
> loop.
> 
> The real problem with the spurious errors is that then people will put
> in an initialization "just to shut the compiler up." Time passes, and
> the next guy is looking at the code and wonders why x is being
> initialized to a value that is apparently never used, or worse, is
> initialized to some bogus value randomly picked by the long-retired
> programmer. I've seen code reviewers losing a lot of time on this issue.

That's why we have immutable variables. They force you to think about 
what to put in the variables. A lot of cases like the one above would be 
solved if if-then-else were a functional expression instead of a 
void-returning statement. C/C++/D have the ternary ?:, but the syntax is 
obfuscated.

Object p = if (m) {
  ...
  foo;
} else {
  ...
  bar;
}

instead of

Object p;
if (m) {
  ...
  p = foo;
} else {
  ...
  p = bar;
}

There are even cases where the former can be const. The latter one has to 
be mutable in any case.
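
For comparison, a minimal sketch of how close you can get in today's D 
(Widget, makeA and makeB are made-up stand-ins for the foo/bar above): 
the ternary covers single-expression branches, and an immediately-called 
delegate literal can act as a poor man's if-expression, so the binding 
can still be const.

import std.stdio;

class Widget {}

Widget makeA() { return new Widget(); }   // hypothetical stand-in for 'foo'
Widget makeB() { return new Widget(); }   // hypothetical stand-in for 'bar'

void main()
{
    bool m = true;

    // Single-expression branches: the ternary already yields an
    // expression, so the result can be const.
    const Widget p = m ? makeA() : makeB();

    // Multi-statement branches: call a delegate literal immediately.
    const Widget q = {
        if (m)
        {
            // ...
            return makeA();
        }
        else
        {
            // ...
            return makeB();
        }
    }();

    writeln(p !is null, " ", q !is null);
}

It's clunkier than a real if-expression - which is exactly the complaint 
above - but it does keep the binding const.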


Re: Can non-nullable references be implemented as a library?

2010-11-07 Thread retard
Sun, 07 Nov 2010 17:06:12 -0500, bearophile wrote:

> retard:
> 
>> There are these DIPs in wiki4d. Were they useful? At least it seems
>> that this thread is leading nowhere. Half of the people don't know what
>> non- nullable means. It's hard to trust this process when it seems to
>> go nowhere. No one wants to validate the design decisions.
> 
> If a language feature is too complex to understand & design for the
> community of people that use the language, then it may be better not to
> add that feature to the language. Maybe non-nullable types are too
> complex to design for D. Even D2 immutability and the D1 type system
> seem borderline to the maximum complexity of things that may be added to
> D. The design of the module system and immutability still have some
> important holes.

I bet even the basic class/interface/exception system of D is too complex 
to explain to some members of the audience. You can't assume that the 
same people who *use* the language can or want to understand the 
implementation of its features. Of course it's beneficial to understand 
how everything works, but in many practical tasks you 1) write some 
high-level code, 2) compile it, 3) analyze the output (the generated 
binary), and 4) perhaps hand-optimize some parts. You don't need to 
understand what the compiler actually did in every phase.


Re: Can non-nullable references be implemented as a library?

2010-11-07 Thread retard
Sun, 07 Nov 2010 19:39:09 +0200, so wrote:

>> Andrei's stance is, either a library addon or ship D without that
>> feature. D's library already contains both tuples and algebraic data
>> types. They're simple to use, almost like in Python. The reason for
>> library addons isn't that builtin features make less sense, the reason
>> is that TDPL is already out and we can't improve the language in any
>> radical way.
> 
> Let's talk about solutions in this thread rather than politics; politics
> "never" improves anything.

There was this other thread here -- "why a part of d community do not 
want to go to d2?"

One reason is that there's no good process for handling these feature 
proposals. Walter attends useless bikeshed discussions and spreads 
misinformation about things he doesn't get, while Andrei has excellent 
knowledge of languages but often prefers to stay in the background.

There are these DIPs in wiki4d. Were they useful? At least it seems that 
this thread is leading nowhere. Half of the people don't know what 
non-nullable means. It's hard to trust this process when it seems to go 
nowhere. No one wants to validate the design decisions.
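
For what it's worth, here's a rough sketch of what the "library addon" 
route could look like. This is just an illustration; NonNull and all its 
details are made up, it's not an actual Phobos or Tango type.

import std.exception : enforce;

/// Hypothetical library-side non-nullable wrapper (illustration only).
struct NonNull(T) if (is(T == class))
{
    private T payload;

    @disable this();   // no default construction, so no null sneaks in

    this(T value)
    {
        enforce(value !is null, "NonNull constructed from null");
        payload = value;
    }

    @property inout(T) get() inout { return payload; }

    // Forward member access to the wrapped reference.
    alias get this;
}

unittest
{
    class C { int x = 42; }
    auto n = NonNull!C(new C);
    assert(n.x == 42);
    // NonNull!C uninitialized;      // compile error: default ctor disabled
    // auto bad = NonNull!C(null);   // throws at run time, not compile time
}

The obvious catch is that the null check happens when the wrapper is 
constructed, at run time, not at compile time - which is exactly why 
people keep asking for language support instead.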


Re: null [re: spec#]

2010-11-07 Thread retard
Sun, 07 Nov 2010 01:54:24 -0500, Nick Sabalausky wrote:

> "Nick Sabalausky"  wrote in message
> news:ib5ht0$2uf...@digitalmars.com...
>> "Walter Bright"  wrote in message
>> news:ib5bue$2ld...@digitalmars.com...
>>> Jonathan M Davis wrote:
 Going C# or Java's route forces the programmer to initialize
 variables even in cases where they know that it's not necessary
 (which is annoying but may or may not be worth it),
>>>
>>> Correct. It's not that doing flow analysis is hard, it's that it's
>>> impossible to do it correctly. So you wind up with wishy-washy
>>> messages that p "might not" be initialized, which is what the Java
>>> compiler does for this:
>>>
>>>   class A
>>>   {
>>>public void foo()
>>>{
>>>Object p;
>>>if (m)
>>> p = new Object();
>>>if (m)
>>> p.toString();  // <-- p might not have been
>>> initialized
>>>}
>>>boolean m;
>>>   }
>>>
>>> It even errors out if you write it as:
>>>
>>>   class A
>>>   {
>>>public void foo()
>>>{
>>>Object p;
>>>if (m)
>>> p = new Object();
>>>if (p != null)  // <-- p might not have been initialized
>>> p.toString();
>>>}
>>>boolean m;
>>>   }
>>>
>>> Note that the error message is on the null check!
>>
>> Since when should crap like that ever be written in the first place? In
>> a code review, I'd slap both of those with a giant red "convoluted"
>> stamp, *especially* if it's not just a trivial example like those.
>>
>> Besides, I'd much rather have easily-fixable false positives like that
>> than the false negatives D gets now:
>>
>> Object p;
>> if (m)
>>p = new Object();
>> p.toString(); // Boom!, but *only* at run-time, and *only* if m just
>> happens to be true.
>>
>> Plus, as I've argued before, I *wouldn't* want perfect flow analysis on
>> that, I'd rather have easily-rememberable rules. If the
>> initialization-safety of your code is dependent on complex logic, then
>> you've written it wrong anyway.
>>
>> In simple examples like yours above, the fixes are not only obvious,
>> but much more clear:
>>
>> Object p;
>> if (m)
>> {
>>p = new Object();
>>p.toString();
>> }
>>
>> And in more complex cases, relying on complex logic to ensure things
>> are inited properly is just wrong anyway, as I said above. Seriously,
>> this whole "feature" amounts to nothing more than allowing the
>> following *broken* code to occasionally get overlooked...
>>
>> Object p;
>> if (m)
>>p = new Object();
>> p.toString();
>>
>> ...just for the completely non-existent "convenience" of writing crap
>> like this...
>>
>> Object p;
>> if (m)
>>p = new Object();
>> if (m)
>>p.toString();
>>
>> ...instead of just doing it right:
>>
>> Object p;
>> if (m)
>> {
>>p = new Object();
>>p.toString();
>> }
>>
>> You can label C#-style init-checking "wishy-washy" all you want, but
>> that's still a hell of a lot better than "wrong", which is what D does
>> (as evidenced by my first example above).
>>
>>
> Additionally, the root problem with default values is that they make
> deliberately-default-inited declarations and accidentally-uninited
> declarations completely indistinguishable by both the compiler and the
> programmer.
> 
> (And no, "accidentally-uninited" does *not* imply "undefined value". If
> something's supposed to be inited to X and it gets inited to Y, that's
> still *wrong* - *even* if it's reproducibly-wrong.)

When I started with D, some of the main reasons for choosing D were:

 - you can return int from a void function without compilation errors
 - you can use 'void main()' instead of 'int main()' with a silly 
'return 0;' at the end
 - counter variables are default initialized to 0

I thought it would save so much typing.
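
For reference, a minimal sketch of what D's default initialization 
actually gives you - integers start at 0, floating point at NaN, 
references at null - so only the counter case above really saves typing:

import std.math : isNaN;

void main()   // 'void main()' is accepted; the runtime supplies the exit code
{
    int counter;   // default-initialized to 0
    double d;      // default-initialized to NaN, deliberately not 0
    Object o;      // default-initialized to null

    assert(counter == 0);
    assert(isNaN(d));
    assert(o is null);
}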


Re: Spec#, nullables and more

2010-11-06 Thread retard
Sat, 06 Nov 2010 21:52:22 +0100, Jérôme M. Berger wrote:

> FeepingCreature wrote:
>> Walter Bright Wrote:
>>> All that does is reinvent the null pointer seg fault. The hardware
>>> does this for you for free.
>> 
>> Walter, I know you're a Windows programmer but this cannot be the first
>> time somebody has told you this - YOU CANNOT RECOVER FROM SEG FAULTS
>> UNDER LINUX.
>> 
>> Access violations are not a cross-platform substitute for exceptions.
> 
>   I really, really hope that you can't recover from seg faults on
> Windows (or MacOS for that matter). Segmentation faults should be non
> recoverable: once the program has started accessing memory that it
> shouldn't, there is no way to guarantee that the program is in a
> coherent state. Walter has stated this several times and I agree with
> him 100% on this.

I also agree with him 100% on this.

The problem non-nullable types try to solve is reducing the number of 
possible segfaults in the first place. I don't care how hard the program 
crashes; I just don't want my client to ever experience one. Segfaults 
are terribly common in C/C++ code. I run into them every week. My 
initial motivation to study and use D was born from these crashes. I 
can't believe you're not using every known mechanism to prevent them.


Re: Spec#, nullables and more

2010-11-06 Thread retard
Sat, 06 Nov 2010 11:24:01 -0700, Walter Bright wrote:

> retard wrote:
>> In a functional language:
>> 
>> start_the_car c = case c of
>>   Just car -> start car
>>   Nothing -> error "not initialized"
> 
> And the null pointer exception is reinvented!

What was the point of my post again? To be an inspiration for stupid 
remarks?

