Re: Java compilation [was GCs in the news]

2014-07-27 Thread Russel Winder via Digitalmars-d
On Sun, 2014-07-27 at 12:51 +, Chris via Digitalmars-d wrote:
> On Sunday, 27 July 2014 at 08:24:44 UTC, Russel Winder via 
> Digitalmars-d wrote:
[…]
> > He also mentions that the C/C++ build aspects of Gradle are to 
> > be used
> > by the Android NDK folk. I already asked them about including D 
> > in the
> > package, but the response was "nobody uses D".
> 
> I am nobody.

I was fairly appalled at the response so I have requested the ability to
clone the C/C++ stuff so as to add D and send in pull requests. Whatever
anyone thinks of Gradle (or SCons) for D, it is looking more and more
like Gradle is the route to build on Android. So if we want D on Android,
ensuring "buildable with Gradle" is a way of removing a hurdle.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-27 Thread Chris via Digitalmars-d
On Sunday, 27 July 2014 at 08:24:44 UTC, Russel Winder via 
Digitalmars-d wrote:
On Thu, 2014-07-24 at 11:39 +, Paulo Pinto via 
Digitalmars-d wrote:

[…]
In this specific case yes, but as I mentioned there are lots 
of use cases being reported.


It turns out to be a "known fact" even in Gradleware. Hans 
mentions it
specifically in his "vision for the future" document of a month 
ago.


He also mentions that the C/C++ build aspects of Gradle are to 
be used
by the Android NDK folk. I already asked them about including D 
in the

package, but the response was "nobody uses D".


I am nobody.


So maybe we (I guess this
means I) should do a user contributed patch to add D to the 
whole thing.




Re: Java compilation [was GCs in the news]

2014-07-27 Thread Russel Winder via Digitalmars-d
On Thu, 2014-07-24 at 11:39 +, Paulo Pinto via Digitalmars-d wrote:
[…]
> In this specific case yes, but as I mentioned there are lots of 
> use cases being reported.

It turns out to be a "known fact" even in Gradleware. Hans mentions it
specifically in his "vision for the future" document of a month ago.

He also mentions that the C/C++ build aspects of Gradle are to be used
by the Android NDK folk. I already asked them about including D in the
package, but the response was "nobody uses D". So maybe we (I guess this
means I) should do a user contributed patch to add D to the whole thing.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-24 Thread Paulo Pinto via Digitalmars-d
On Thursday, 24 July 2014 at 11:35:09 UTC, Russel Winder via 
Digitalmars-d wrote:
On Thu, 2014-07-24 at 11:09 +, Paulo Pinto via 
Digitalmars-d wrote:
On Thursday, 24 July 2014 at 11:01:35 UTC, Russel Winder via 
Digitalmars-d wrote:
> On Thu, 2014-07-24 at 09:38 +, Paulo Pinto via 
> Digitalmars-d wrote:

> […]
>> 
>> Nope, Gradle, as shown by the CPU usage on the task manager.

>
> I am surprised, but data always trumps opinion.

One of the first Google results,

http://askubuntu.com/questions/469709/gradle-compiling-slows-down-my-computer

You can find many more out there, in many combinations of use 
cases.


Looks like Android Studio tells Gradle to use as many threads as 
there are cores, so this is an Android Studio problem, not a 
Gradle problem per se.


In this specific case yes, but as I mentioned there are lots of 
use cases being reported.


--
Paulo


Re: Java compilation [was GCs in the news]

2014-07-24 Thread Russel Winder via Digitalmars-d
On Thu, 2014-07-24 at 11:09 +, Paulo Pinto via Digitalmars-d wrote:
> On Thursday, 24 July 2014 at 11:01:35 UTC, Russel Winder via 
> Digitalmars-d wrote:
> > On Thu, 2014-07-24 at 09:38 +, Paulo Pinto via 
> > Digitalmars-d wrote:
> > […]
> >> 
> >> Nope, Gradle, as shown by the CPU usage on the task manager.
> >
> > I am surprised, but data always trumps opinion.
> 
> One of the first Google results,
> 
> http://askubuntu.com/questions/469709/gradle-compiling-slows-down-my-computer
> 
> You can find many more out there, in many combinations of use cases.

Looks like Android Studio tells Gradle to use as many threads as there
are cores, so this is an Android Studio problem, not a Gradle problem
per se.
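
For anyone bitten by this in the meantime: the parallelism is just
configuration. A rough gradle.properties sketch follows; which keys are
worth tuning depends on the Gradle version in use, so treat it as a
starting point rather than a recipe:

    # gradle.properties
    org.gradle.parallel=false   # don't build decoupled projects in parallel
    org.gradle.daemon=true      # reuse a warm build JVM instead of spawning new ones
    org.gradle.jvmargs=-Xmx1g   # cap the build JVM's heap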
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-24 Thread Paulo Pinto via Digitalmars-d
On Thursday, 24 July 2014 at 11:01:35 UTC, Russel Winder via 
Digitalmars-d wrote:
On Thu, 2014-07-24 at 09:38 +, Paulo Pinto via 
Digitalmars-d wrote:

[…]


Nope, Gradle, as shown by the CPU usage on the task manager.


I am surprised, but data always trumps opinion.


One of the first Google results,

http://askubuntu.com/questions/469709/gradle-compiling-slows-down-my-computer

You can find many more out there, in many combinations of use cases.


Re: Java compilation [was GCs in the news]

2014-07-24 Thread Russel Winder via Digitalmars-d
On Thu, 2014-07-24 at 09:38 +, Paulo Pinto via Digitalmars-d wrote:
[…]
> 
> Nope, Gradle, as shown by the CPU usage on the task manager.

I am surprised, but data always trumps opinion.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-24 Thread Paulo Pinto via Digitalmars-d
On Thursday, 24 July 2014 at 08:34:30 UTC, Russel Winder via 
Digitalmars-d wrote:
On Wed, 2014-07-23 at 21:32 +0200, Paulo Pinto via 
Digitalmars-d wrote:

[…]




The situation is so bad it was even mentioned at this Google 
IO Android developer tools talk.


I think this will be a JetBrains problem rather than a Gradle 
problem.


Nope, Gradle, as shown by the CPU usage on the task manager.

--
Paulo


Re: Java compilation [was GCs in the news]

2014-07-24 Thread Russel Winder via Digitalmars-d
On Wed, 2014-07-23 at 14:37 -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
> On 7/23/14, 12:23 PM, Russel Winder via Digitalmars-d wrote:
> > BTW what's with the rabbit and the monkey?
> 
> He promised his kid they'll go on an adventure with daddy. A really nice 
> touch. I might steal it for my own talks. -- Andrei

Excellent. Perhaps we should make that "a thing": every speaker must have
their "cuddly toy" companion on stage.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-24 Thread Russel Winder via Digitalmars-d
On Wed, 2014-07-23 at 21:32 +0200, Paulo Pinto via Digitalmars-d wrote:
[…]
> 
> I only tried Gradle because of Android Studio; it makes such bad use of 
> hardware resources, pegging my i7 and Core Duo processors, that I 
> returned to Eclipse + ADT on the same day.

I have not tried Android Studio for anything as yet. It is based on
IntelliJ IDEA though (as is PyCharm) and IntelliJ IDEA beats Eclipse
hands down for Java and Groovy work (as PyCharm beats Eclipse/PyDev 
hands down for Python). For me, YMMV.

> The situation is so bad it was even mentioned at this Google IO Android 
> developer tools talk.

I think this will be a JetBrains problem rather than a Gradle problem.

> This aborted my attempt to try to use Kotlin instead of C++ on my hobby 
> Android projects.

Kotlin is great fun, but I only use IntelliJ IDEA for that.

> As for our Fortune 500 customer portfolio, the ones using Java are 
> still 100% in a mix of Ant and Maven.



I gave up Ant when I wrote Gant (*), and avoided Maven until Gradle
arrived. Humans should not have to hand write XML ever.


(*) Someone forked this to create the Groovy front end to Ant, which
must beat the XML one any and every day of the week.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-24 Thread Russel Winder via Digitalmars-d
On Wed, 2014-07-23 at 22:58 +0200, Paulo Pinto via Digitalmars-d wrote:
[…]
> So far I could only find
> "Looking into the JVM Crystal Ball"
> http://www.parleys.com/play/524f6b5be4b0a43ac12123a9/about
> 
> Between 00:40:00 and 00:45:50, compilation gets discussed, including AOT.
> 
> Not the ones about Graal, though.
> 
> I am pretty sure I saw a slide with it as part of the Java 9+ wishlist,
> now just have to remember if it was actually at JavaONE, Devoxx, FOSDEM 
> or Jax. :\

I'll check this out. I am also getting the folk from the LJC who
represent the LJC on the JCP EC (the LJC is an elected member) to get a
definitive statement on the road map.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-23 Thread Walter Bright via Digitalmars-d

On 7/23/2014 1:46 AM, Russel Winder via Digitalmars-d wrote:

I think you'll find HotSpot evolved from a Smalltalk JIT originally.
Borland and Symantec had JVM JITs as well; Sun even licensed the
Symantec one for a while.


Fun fact: the guy who wrote Symantec's JVM JIT, Steve Russell, is the very guy 
who wrote Optlink!


Re: Java compilation [was GCs in the news]

2014-07-23 Thread deadalnix via Digitalmars-d

On Wednesday, 23 July 2014 at 18:45:23 UTC, Andrei Alexandrescu
wrote:

On 7/23/14, 11:40 AM, Andrei Alexandrescu wrote:

On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:
For others: Gradle is becoming the de facto standard build 
framework for

JVM-based things and also Android.


Uhm, I'm literally right now in a talk on Buck
(https://github.com/facebook/buck) at OSCON. -- Andrei


Fresh photo comparing buck with gradle: 
http://i.imgur.com/uGHdfyq.jpg -- Andrei


Say hi to Simon :)


Re: Java compilation [was GCs in the news]

2014-07-23 Thread deadalnix via Digitalmars-d

On Wednesday, 23 July 2014 at 11:54:19 UTC, Atila Neves wrote:

http://benchmarksgame.alioth.debian.org/

There's no good reason for C to beat C++. Even if there were, 
it would be simple to rewrite the C++ bottleneck in C style. 
Likewise, there's no good reason for C to beat D either.


I was surprised by the Java results once they started beating C 
at certain benchmarks years ago. But the fact is it does.


Atila


It usually does in memory-intensive benchmarks that aren't
multithreaded. Java's GC is a free shot of concurrency that C
won't get.


Re: Java compilation [was GCs in the news]

2014-07-23 Thread Andrei Alexandrescu via Digitalmars-d

On 7/23/14, 12:23 PM, Russel Winder via Digitalmars-d wrote:

BTW what's with the rabbit and the monkey?


He promised his kid they'll go on an adventure with daddy. A really nice 
touch. I might steal it for my own talks. -- Andrei




Re: Java compilation [was GCs in the news]

2014-07-23 Thread Paulo Pinto via Digitalmars-d

Am 23.07.2014 21:27, schrieb Russel Winder via Digitalmars-d:

On Wed, 2014-07-23 at 09:11 +, Paulo Pinto via Digitalmars-d wrote:
[…]



It was presented as such at JavaONE for possible future Java 9+
improvements.

I can try to dig out the presentation, if you wish.


Clearly I need to update my knowledge!



So far I could only find
"Looking into the JVM Crystal Ball"
http://www.parleys.com/play/524f6b5be4b0a43ac12123a9/about

Between 00:40:00 and 00:45:50, compilation gets discussed, including AOT.

Not the ones about Graal, though.

I am pretty sure I saw a slide with it as part of the Java 9+ wishlist,
now just have to remember if it was actually at JavaONE, Devoxx, FOSDEM 
or Jax. :\


--
Paulo


Re: Java compilation [was GCs in the news]

2014-07-23 Thread Brad Anderson via Digitalmars-d

On Wednesday, 23 July 2014 at 09:16:57 UTC, John Colvin wrote:


I am suspicious. I understand that a situation can be contrived 
such that C will lose, but in normal, sensible code the only 
language I've ever seen reliably beat C is FORTRAN.


I'm reminded of when headlines came out saying PyPy was now 
faster than C in some cases. I got pretty excited (that's an 
impressive feat of engineering) but upon looking into it, it 
turned out it was just inlining better than C because the C code 
was making a function call into another library. LTCG/LTO wasn't 
even uncommon at the time and would have easily handled that case 
had it been enabled.


Re: Java compilation [was GCs in the news]

2014-07-23 Thread Paulo Pinto via Digitalmars-d

Am 23.07.2014 21:23, schrieb Russel Winder via Digitalmars-d:

On Wed, 2014-07-23 at 11:45 -0700, Andrei Alexandrescu via Digitalmars-d
wrote:

On 7/23/14, 11:40 AM, Andrei Alexandrescu wrote:

On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:

For others: Gradle is becoming the de facto standard build framework for
JVM-based things and also Android.


Uhm, I'm literally right now in a talk on Buck
(https://github.com/facebook/buck) at OSCON. -- Andrei


Fresh photo comparing buck with gradle: http://i.imgur.com/uGHdfyq.jpg
-- Andrei


Were any of the Gradleware folk there? That should really scare them.

BTW what's with the rabbit and the monkey?



I only tried Gradle because of Android Studio; it makes such bad use of 
hardware resources, pegging my i7 and Core Duo processors, that I 
returned to Eclipse + ADT on the same day.


The situation is so bad it was even mentioned at this Google IO Android 
developer tools talk.


This aborted my attempt to try to use Kotlin instead of C++ on my hobby 
Android projects.


As for our Fortune 500 customer portfolio, the ones using Java are 
still 100% in a mix of Ant and Maven.


--
Paulo


Re: Java compilation [was GCs in the news]

2014-07-23 Thread Russel Winder via Digitalmars-d
On Wed, 2014-07-23 at 09:16 +, John Colvin via Digitalmars-d wrote:
[…]
> I am suspicious. I understand that a situation can be contrived 
> such that C will lose, but in normal, sensible code the only 
> language I've ever seen reliably beat C is FORTRAN.

For my data parallel computations, I find C++ with TBB tends to be the
winner. C, C++ and Fortran (not FORTRAN!) with OpenMP do fairly well.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-23 Thread Russel Winder via Digitalmars-d
On Wed, 2014-07-23 at 09:11 +, Paulo Pinto via Digitalmars-d wrote:
[…]
> I will happily use it when it gets to the same execution speed, and 
> uses the same hardware resources, as Eclipse + ADT currently does.

The way I work with Gradle is to generate Eclipse or IntelliJ IDEA
projects if I am going to use Eclipse or IntelliJ IDEA.

[…]
> > Graal isn't a replacement for HotSpot but a dynamic compilation
> > technology to work with HotSpot. It is actually a very promising
> > technology, I am looking forward to trying it out.
> 
> Yes it is.
> 
> It was presented as such at JavaONE for possible future Java 9+ 
> improvements.
> 
> I can try to dig out the presentation, if you wish.

Clearly I need to update my knowledge!
[…]
> I agree, in the cases where the toolchain offers both possibilities out 
> of the box and does not force developers to choose among 
> different vendors' toolchains.

I am trying to get folk in the JVM benchmarking trade to tell me what
the latest SP is on things.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-23 Thread Russel Winder via Digitalmars-d
On Wed, 2014-07-23 at 11:45 -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
> On 7/23/14, 11:40 AM, Andrei Alexandrescu wrote:
> > On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:
> >> For others: Gradle is becoming the de facto standard build framework for
> >> JVM-based things and also Android.
> >
> > Uhm, I'm literally right now in a talk on Buck
> > (https://github.com/facebook/buck) at OSCON. -- Andrei
> 
> Fresh photo comparing buck with gradle: http://i.imgur.com/uGHdfyq.jpg 
> -- Andrei

Were any of the Gradleware folk there? That should really scare them.

BTW what's with the rabbit and the monkey?

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-23 Thread Andrei Alexandrescu via Digitalmars-d

On 7/23/14, 11:40 AM, Andrei Alexandrescu wrote:

On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:

For others: Gradle is becoming the de facto standard build framework for
JVM-based things and also Android.


Uhm, I'm literally right now in a talk on Buck
(https://github.com/facebook/buck) at OSCON. -- Andrei


Fresh photo comparing buck with gradle: http://i.imgur.com/uGHdfyq.jpg 
-- Andrei





Re: Java compilation [was GCs in the news]

2014-07-23 Thread Andrei Alexandrescu via Digitalmars-d

On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:

For others: Gradle is becoming the de facto standard build framework for
JVM-based things and also Android.


Uhm, I'm literally right now in a talk on Buck 
(https://github.com/facebook/buck) at OSCON. -- Andrei




Re: Java compilation [was GCs in the news]

2014-07-23 Thread Atila Neves via Digitalmars-d

On Wednesday, 23 July 2014 at 09:16:57 UTC, John Colvin wrote:
On Wednesday, 23 July 2014 at 08:46:32 UTC, Russel Winder via 
Digitalmars-d wrote:
On Tue, 2014-07-22 at 10:55 +, Paulo Pinto via 
Digitalmars-d wrote:

[…]

The JVM JIT was originally targeted to SELF, not Java.


I think you'll find HotSpot evolved from a Smalltalk JIT 
originally.
Borland and Symantec had JVM JITs as well; Sun even licensed the 
Symantec one for a while.

[…]
Functional programming languages have AOT compilers and they 
perform quite well, almost to C level in many use cases.


True. Java/JVM/JIT also performs very well surpassing C in 
many cases.

Indeed C++ surpasses C in many cases as well.


I am suspicious. I understand that a situation can be contrived 
such that C will lose, but in normal, sensible code the only 
language I've ever seen reliably beat C is FORTRAN.


http://benchmarksgame.alioth.debian.org/

There's no good reason for C to beat C++. Even if there were, it 
would be simple to rewrite the C++ bottleneck in C style. 
Likewise, there's no good reason for C to beat D either.


I was surprised by the Java results once they started beating C 
at certain benchmarks years ago. But the fact is it does.


Atila



Re: Java compilation [was GCs in the news]

2014-07-23 Thread Bienlein via Digitalmars-d



The JVM JIT was originally targeted to SELF, not Java.


Yes, that's right. The guys that developed Self (David Ungar et 
al.) then set out to develop a high-performance typed Smalltalk 
using the optimization techniques they developed for Self. The 
Smalltalk system never hit the market as the development team was 
acquired by Sun before that could happen. The Smalltalk system 
they were working on was released to the public: 
http://www.strongtalk.org/


I think you'll find HotSpot evolved from a Smalltalk JIT 
originally.


The reason I replied to this is that the original technology 
developed for Self was not a JIT. It was a runtime byte code 
optimizer that was put into Java under the name HotSpot. Since HotSpot 
operates at runtime it can optimize things an optimizing compiler 
could not find at compile time. This is why Java sometimes 
achieves very good performance and in isolated cases can 
compete with C.




Re: Java compilation [was GCs in the news]

2014-07-23 Thread John Colvin via Digitalmars-d
On Wednesday, 23 July 2014 at 08:46:32 UTC, Russel Winder via 
Digitalmars-d wrote:
On Tue, 2014-07-22 at 10:55 +, Paulo Pinto via 
Digitalmars-d wrote:

[…]

The JVM JIT was originally targeted to SELF, not Java.


I think you'll find HotSpot evolved from a Smalltalk JIT 
originally.

Borland and Symantec had JVM JITs as well; Sun even licensed the
Symantec one for a while.

[…]
Functional programming languages have AOT compilers and they 
perform quite well, almost to C level in many use cases.


True. Java/JVM/JIT also performs very well surpassing C in many 
cases.

Indeed C++ surpasses C in many cases as well.


I am suspicious. I understand that a situation can be contrived 
such that C will lose, but in normal, sensible code the only 
language I've ever seen reliably beat C is FORTRAN.


Re: Java compilation [was GCs in the news]

2014-07-23 Thread Paulo Pinto via Digitalmars-d
On Wednesday, 23 July 2014 at 08:46:32 UTC, Russel Winder via 
Digitalmars-d wrote:
On Tue, 2014-07-22 at 10:55 +, Paulo Pinto via 
Digitalmars-d wrote:

[…]





I avoid touching Gradle.


Your loss!

For others: Gradle is becoming the de facto standard build 
framework for

JVM-based things and also Android.


I will happily use it when it gets to the same execution speed, and 
uses the same hardware resources, as Eclipse + ADT currently does.




[…]




But the proof is Microsoft adding .NET Native to their 
toolchain, Google replacing Dalvik with AOT and Oracle having 
added AOT compilation (Substrate) to Graal, the candidate for a 
HotSpot replacement.


Graal isn't a replacement for HotSpot but a dynamic compilation
technology to work with HotSpot. It is actually a very promising
technology, I am looking forward to trying it out.


Yes it is.

It was presented as such at JavaONE for possible future Java 9+ 
improvements.


I can try to dig out the presentation, if you wish.




[...]

Why is it one or the other? Having both AOT and JIT will likely 
do even

better. Hence Graal on HotSpot.



I agree, in the cases where the toolchain offers both possibilities out 
of the box and does not force developers to choose among 
different vendors' toolchains.


--
Paulo


Re: Java compilation [was GCs in the news]

2014-07-23 Thread Russel Winder via Digitalmars-d
On Tue, 2014-07-22 at 10:55 +, Paulo Pinto via Digitalmars-d wrote:
[…]
> The JVM JIT was originally targeted to SELF, not Java.

I think you'll find HotSpot evolved from a Smalltalk JIT originally.
Borland and Symantec had JVM JITs as well; Sun even licensed the
Symantec one for a while.

[…]
> Functional programming languages have AOT compilers and they 
> perform quite well, almost to C level in many use cases.

True. Java/JVM/JIT also performs very well surpassing C in many cases.
Indeed C++ surpasses C in many cases as well.

> As for Groovy, I always felt the implementation was 
> lacking in performance.

True. Groovy is a dynamic language not intended for performance
computation. However, it now has static compilation to JVM bytecodes as
well, which leads to it being as fast as, or sometimes faster than, Java.

> I avoid touching Gradle.

Your loss!

For others: Gradle is becoming the de facto standard build framework for
JVM-based things and also Android. 

[…]
> 
> I was discussing JIT vs AOT in abstract.

The trouble is that this isn't a good way of discussing what is a
performance issue that can only be decided by comparative benchmarks.

> To be able to perform such tests you need:
> 
> - A programming language X

In the case at hand X = Java.

> - The state of the art JIT compiler implementation for the given 
> language

I guess HotSpot is the default here, unless anyone has access to the IBM
VM.

> - The state of the art AOT compiler implementation for the given 
> language
> 
> I know a few commercial AOT compilers for Java, not sure which 
> one would be the best one to choose.

I am not sure which I would go with here as I have little experience of
the high cost products. We'd have to get some sponsorship for the
benchmarks. I will ask around the folks in the JVM performance
community.

> But the proof is Microsoft adding .NET Native to their toolchain, 
> Google replacing Dalvik with AOT and Oracle having added AOT 
> compilation (Substrate) to Graal, the candidate for a HotSpot 
> replacement.

Graal isn't a replacement for HotSpot but a dynamic compilation
technology to work with HotSpot. It is actually a very promising
technology, I am looking forward to trying it out.

> So apparently they all agree AOT still wins in many scenarios.

Why is it one or the other? Having both AOT and JIT will likely do even
better. Hence Graal on HotSpot.

Certainly AOT, by putting the burden on compilation, ensures there is no
start-up overhead, so it is a benefit for short-running systems. JIT has an
initial (often large) overhead but, once triggered, produces highly
performant (localized) code. Java is going to have to find the balance
to keep up with the performance needed these days.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Java compilation [was GCs in the news]

2014-07-22 Thread Paulo Pinto via Digitalmars-d
On Tuesday, 22 July 2014 at 08:10:31 UTC, Russel Winder via 
Digitalmars-d wrote:
On Tue, 2014-07-22 at 06:35 +, Paulo Pinto via 
Digitalmars-d wrote:

[…]
Yes it can, if developers bother to do PGO + AOT instead and 
learn the compiler flags.


I used to have a stronger opinion on JIT, but given how many 
JITs perform and do not actually use the hardware as they, in 
theory, could, JITs tend to only be an advantage for dynamic 
languages, not strongly typed ones.


With JIT, writing the code in a way that makes the JIT 
compiler happy is a lost battle, as it depends on the exact 
same JIT implementation being available on the deployment 
system.


I think you have to make good on this claim since the JVM JIT is
intended for Java which is supposedly a static, strongly typed 
language.


The JVM JIT was originally targeted to SELF, not Java.

Moreover, evidence from Groovy is the JVM JIT provides only 
patchy
benefit. The biggest benefit all round is invokedynamic for 
both static
and dynamic languages. Java 8 would be nothing without 
invokedynamic.



Functional programming languages have AOT compilers and they 
perform quite well, almost to C level in many use cases.


As for Groovy, I always felt the implementation was 
lacking in performance.


I avoid touching Gradle.



But maybe we should take this off this list as it is way off 
topic.


Clearly we can use JMH for benchmarking. I have a couple of 
codes I

could use to try things out.

So:

1. How to compile and execute to get full AOT *and* switch off 
the JIT.
2. How to compile and execute to get no AOT and have JIT on 
full.


then we can begin to compare.


I was discussing JIT vs AOT in abstract.

To be able to perform such tests you need:

- A programming language X
- The state of the art JIT compiler implementation for the given 
language
- The state of the art AOT compiler implementation for the given 
language


I know a few commercial AOT compilers for Java, not sure which 
one would be the best one to choose.



But the proof is Microsoft adding .NET Native to their toolchain, 
Google replacing Dalvik with AOT and Oracle having added AOT 
compilation (Substrate) to Graal, the candidate for a HotSpot 
replacement.


So apparently they all agree AOT still wins in many scenarios.

--
Paulo


Re: Java compilation [was GCs in the news]

2014-07-22 Thread Russel Winder via Digitalmars-d
On Tue, 2014-07-22 at 06:35 +, Paulo Pinto via Digitalmars-d wrote:
[…]
> Yes it can, if developers bother to do PGO + AOT instead and 
> learn the compiler flags.
> 
> I used to have a stronger opinion on JIT, but given how many JITs 
> perform and do not actually use the hardware as they, in theory 
> could, JITs tend to only be an advantage for dynamic languages, not 
> strongly typed ones.
> 
> With JIT, writing the code in a way that makes the JIT compiler 
> happy is a lost battle, as it depends on the exact same JIT 
> implementation being available on the deployment system.

I think you have to make good on this claim since the JVM JIT is
intended for Java which is supposedly a static, strongly typed language.
Moreover, evidence from Groovy is that the JVM JIT provides only patchy
benefit. The biggest benefit all round is invokedynamic for both static
and dynamic languages. Java 8 would be nothing without invokedynamic.

But maybe we should take this off this list as it is way off topic.

Clearly we can use JMH for benchmarking. I have a couple of codes I
could use to try things out.

So:

1. How to compile and execute to get full AOT *and* switch off the JIT.
2. How to compile and execute to get no AOT and have JIT on full.

then we can begin to compare.
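
A crude first cut with stock HotSpot flags might look like the following,
assuming a standard JMH uber-jar named benchmarks.jar; -Xint and -Xcomp are
not true AOT, but they give usable endpoints for the comparison:

    java -Xint  -jar benchmarks.jar   # interpreter only, JIT switched off
    java -Xcomp -jar benchmarks.jar   # force compilation of every method up front
    java        -jar benchmarks.jar   # default JIT behaviour, for reference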

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: GCs in the news

2014-07-21 Thread Paulo Pinto via Digitalmars-d
On Monday, 21 July 2014 at 18:31:46 UTC, Russel Winder via 
Digitalmars-d wrote:
On Sun, 2014-07-20 at 16:40 +, Paulo Pinto via 
Digitalmars-d wrote:


[…]
Java has AOT compilers available since the early days. Most 
developers just tend to ignore them, because they are not part 
of the free package.


Also, it is not entirely clear that AOT optimization can beat 
JIT

optimization, at least on the JVM.


Yes it can, if developers bother to do PGO + AOT instead and 
learn the compiler flags.


I used to have a stronger opinion on JIT, but given how many JITs 
perform and do not actually use the hardware as they, in theory 
could, JITs tend to only be an advantage for dynamic languages, not 
strongly typed ones.


With JIT, writing the code in a way that makes the JIT compiler 
happy is a lost battle, as it depends on the exact same JIT 
implementation being available on the deployment system.


--
Paulo


Re: GCs in the news

2014-07-21 Thread deadalnix via Digitalmars-d

On Monday, 21 July 2014 at 18:31:46 UTC, Russel Winder via
Digitalmars-d wrote:
On Sun, 2014-07-20 at 16:40 +, Paulo Pinto via 
Digitalmars-d wrote:


[…]
Java has AOT compilers available since the early days. Most 
developers just tend to ignore them, because they are not part 
of the free package.


Also, it is not entirely clear that AOT optimization can beat 
JIT

optimization, at least on the JVM.


They probably aren't mutually exclusive.


Re: GCs in the news

2014-07-21 Thread Russel Winder via Digitalmars-d
On Sun, 2014-07-20 at 16:40 +, Paulo Pinto via Digitalmars-d wrote:

[…]
> Java has AOT compilers available since the early days. Most 
> developers just tend to ignore them, because they are not part of 
> the free package.

Also, it is not entirely clear that AOT optimization can beat JIT
optimization, at least on the JVM.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: GCs in the news

2014-07-21 Thread Kagamin via Digitalmars-d

On Sunday, 20 July 2014 at 12:30:02 UTC, Mike wrote:
Yes, I believe you are correct.  I also believe there is even a 
GCStub in the runtime that uses malloc without free.  What's 
missing is API documentation and examples that make such 
features accessible.


The existing functions should be understandable, so you can 
document them yourself. If you want to standardize the API, you 
can write a small wrapper library, which will account for 
possible internal API changes and map them to your standard API. 
Examples are up to you, since nobody knows what features you 
will implement in your GC implementation and what API they should 
have. You have gcstub as an example of the GC proxy substitution 
API.


In short, IMO, D should not embrace one type of automatic 
memory management; it should make it extensible.  In time two 
or three high-quality memory managers will prevail.


It's a matter of writing an appropriate library and providing it 
as a dub module. You know best what you want; you are the 
one to make your wish come to life.


Re: GCs in the news

2014-07-21 Thread Paulo Pinto via Digitalmars-d

On Friday, 18 July 2014 at 09:25:46 UTC, Chris wrote:
On Thursday, 17 July 2014 at 18:19:04 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Thu, Jul 17, 2014 at 05:58:14PM +, Chris via 
Digitalmars-d wrote:
On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via 
Digitalmars-d

wrote:

[...]
>AFAIK some work still needs to be done with std.string; 
>Walter for
>one has started some work to implement range-based 
>equivalents for
>std.string functions, which would be non-allocating; we just 
>need a

>bit of work to push things through.
>
>DMD 2.066 will have @nogc, which will make it easy to 
>discover which
>remaining parts of Phobos are still not GC-free. Then we'll 
>know

>where to direct our efforts. :-)
>
>
>T

That's good news! See, we're getting there, just bear with 
us. This
begs the question of course, how will this affect existing 
code? My

code is string intensive.


I don't think it will affect existing code (esp. given 
Walter's stance
on breaking changes!). Probably the old GC-based string 
functions will
still be around for backwards-compatibility. Perhaps some of 
them might
be replaced with non-GC versions where it can be done 
transparently, but
I'd expect you'd need to rewrite your string code to take 
advantage of
the new range-based stuff. Hopefully the rewrites will be 
minimal (e.g.,
pass in an output range as argument instead of getting a 
returned
string, replace allocation-based code with a UFCS chain, 
etc.). The

ideal scenario may very well be as simple as tacking on
`.copy(myBuffer)` at the end of a UFCS chain. :-P


T


That sounds good to me! This gives me time to upgrade my old 
code little by little and use the new approach when writing new 
code. Phew!


By the way, my code is string intensive and I still have some 
suboptimal (greedy) ranges here and there. But believe it or 
not, they're no problem at all. The application (a plugin for a 
screen reader) is fast and responsive* (according to user 
feedback) like any other screen reader plugin, and it hasn't 
crashed for ages (thanks to GC?) - knock on wood! I use a lot 
of lazy ranges too plus some pointer magic for work intensive 
algorithms. Plus D let me easily model the various relations 
between text and speech (for other use cases down the road). 
Maybe it is not a real time system, but it has to be 
responsive. So far, GC hasn't affected it negatively. Once the 
online version will be publicly available, I will report how 
well vibe.d performs. Current results are encouraging.


As regards Java, the big advantage of D is that it compiles to 
a native DLL and all users have to do is to double click on it 
to install. No "please download JVM" nightmare. I've been 
there. Users cannot handle it (why should they?), and to 
provide it as a developer is a waste of time and resources, and 
it might still go wrong which leaves both the users and the 
developers angry and frustrated.


* The only thing that bothers me is that there seems to be a 
slight audio latency problem on Windows, which is not D's 
fault. On Linux it speaks as soon as you press .


Java has AOT compilers available since the early days. Most 
developers just tend to ignore them, because they are not part of 
the free package.


--
Paulo


Re: GCs in the news

2014-07-20 Thread Mike via Digitalmars-d

On Sunday, 20 July 2014 at 12:07:47 UTC, Kagamin wrote:

On Sunday, 20 July 2014 at 11:44:56 UTC, Mike wrote:
Being able to specify an alternate memory manager at 
compile-time, link-time and/or runtime would be most 
advantageous, and probably put an end to the GC-phobia.


AFAIK, GC is not directly referenced in druntime, so you 
already should be able to link with different GC 
implementation. If you provide all symbols requested by the 
code, the linker won't link default GC module.


Yes, I believe you are correct.  I also believe there is even a 
GCStub in the runtime that uses malloc without free.  What's 
missing is API documentation and examples that make such 
features accessible.


Also missing are language/runtime hooks that could allow users 
to try alternative memory management schemes such as ARC and find 
what works best for them through experimentation.


In short, IMO, D should not embrace one type of automatic memory 
management; it should make it extensible.  In time two or 
three high-quality memory managers will prevail.


Mike


Re: GCs in the news

2014-07-20 Thread Kagamin via Digitalmars-d

On Sunday, 20 July 2014 at 11:44:56 UTC, Mike wrote:
Being able to specify an alternate memory manager at 
compile-time, link-time and/or runtime would be most 
advantageous, and probably put an end to the GC-phobia.


AFAIK, GC is not directly referenced in druntime, so you already 
should be able to link with a different GC implementation. If you 
provide all symbols requested by the code, the linker won't link the 
default GC module.


Re: GCs in the news

2014-07-20 Thread Mike via Digitalmars-d
On Sunday, 20 July 2014 at 08:41:16 UTC, Iain Buclaw via 
Digitalmars-d wrote:

On 17 Jul 2014 13:40, "w0rp via Digitalmars-d"
The key to making D's GC acceptable lies in two factors I 
believe.


1. Improve the implementation enough so that you will only be 
impacted by

GC in extremely low memory or real time environments.
2. Defer allocation more and more by using ranges and 
algorithms more,

and trust that compiler optimisations will make these fast.




How about
1. Make it easier to select which GC you want to use at runtime 
init.
2. Write an alternate GC aimed at different application uses 
(ie: real-time)




Yes, Please!

Being able to specify an alternate memory manager at 
compile-time, link-time and/or runtime would be most 
advantageous, and probably put an end to the GC-phobia.


DIP46 [1] also proposes an interesting alternative to the GC by 
creating regions at runtime.


And given the passion surrounding the GC in this community, if 
runtime hooks and/or a suitable API for custom memory managers 
were created and documented, it would invite participation and an 
informal, highly competitive contest for the best GC would likely 
ensue.


Mike

[1] http://wiki.dlang.org/DIP46


Re: GCs in the news

2014-07-20 Thread Iain Buclaw via Digitalmars-d
On 17 Jul 2014 13:40, "w0rp via Digitalmars-d" 
wrote:
>
> The key to making D's GC acceptable lies in two factors I believe.
>
> 1. Improve the implementation enough so that you will only be impacted by
GC in extremely low memory or real time environments.
> 2. Defer allocation more and more by using ranges and algorithms more,
and trust that compiler optimisations will make these fast.
>

How about
1. Make it easier to select which GC you want to use at runtime init.
2. Write an alternate GC aimed at different application uses (ie: real-time)

We already have (at least) three GC implementations for D.

Regards
Iain


Re: GCs in the news

2014-07-20 Thread safety0ff via Digitalmars-d

On Saturday, 19 July 2014 at 21:12:44 UTC, Walter Bright wrote:


3. slices become mostly unworkable, and slices are a fantastic 
way to speed up a program


They are even more fantastic for speeding up programming.
I think that programmer time isn't included often enough in 
discussions.


I have a program for which I used D to quickly prototype and form my 
baseline implementation.
After getting a semi-refined implementation I converted the 
performance critical part to C++.
The D code that survived the rewrite uses slices + ranges, and 
it's not worth converting that to C++ code (it would be less 
elegant and isn't worth the time.)


The bottom line is that without D's slices, I might not have 
bothered bringing that small project to the level of completion 
it has today.


Re: GCs in the news

2014-07-19 Thread Walter Bright via Digitalmars-d

On 7/17/2014 11:44 AM, Russel Winder via Digitalmars-d wrote:

With C++ I am coming to grips with RAII management of the heap. With
Java, Groovy, Go and Python I rely on the GC doing a good job. I note
though that there is a lot of evidence that the Unreal folk developed a
garbage collector for C++ exactly because they didn't want to do the
RAII thing.


RAII has a lot of costs associated with it that I am often surprised go 
completely unrecognized by the RAII community:


1. the "dec" operation (i.e. shared_ptr) is expensive

2. the inability to freely mix pointers allocated with different schemes

3. slices become mostly unworkable, and slices are a fantastic way to speed up a 
program




Re: GCs in the news

2014-07-19 Thread Walter Bright via Digitalmars-d

On 7/17/2014 5:06 PM, H. S. Teoh via Digitalmars-d wrote:

MyOutputRange sink; // allocate using whatever scheme you want
myInput.withoutTabs.copy(sink);

The algorithm itself doesn't need to know where the result will end up
-- sink could be stdout, in which case no allocation is needed at all.


Exactly! The algorithm becomes completely divorced from the memory allocation. I 
believe this is a very powerful technique.
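
A minimal sketch of the idea; the tab-to-space step here is just an
illustrative map, not the eventual Phobos API:

import std.algorithm : copy, map;
import std.array : appender;
import std.stdio : stdout;

void main()
{
    // One lazy pipeline; the caller picks the sink and hence the allocation.
    auto pipeline = "some\tinput".map!(c => c == '\t' ? ' ' : c);

    // Sink 1: stream straight to stdout -- no allocation at all.
    pipeline.copy(stdout.lockingTextWriter());

    // Sink 2: collect into a GC-backed buffer chosen by the caller.
    auto buf = pipeline.copy(appender!string());
    assert(buf.data == "some input");
}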




Re: GCs in the news

2014-07-19 Thread Kagamin via Digitalmars-d

On Thursday, 17 July 2014 at 19:14:06 UTC, Right wrote:
 I'm rather fond of RAII, I find that I rarely every need 
shared semantics.
 I use a custom object model that allows for weak_ptrs to 
unique_ptrs which I think removes some cases where people might 
otherwise be inclined to use shared_ptr.


 Shared semantics are so rare in fact I would say I hardly use 
it at all, I go for weeks of coding without creating a shared 
type, not because I'm trying to do so, but because it just 
isn't necessary.


 Which is why GC seems like such a waste, given my experience 
in C++, where I hardly need shared memory, I see little use for 
a GC (or even ARC etc.); all it will do is decrease program 
performance, make deterministic destruction impossible, and 
prevent automatic cleanup of non-memory resources.


 Rust seems to have caught on to what C++ has accomplished here.


Though, GC is safer, easier and cheaper than the ownership model, 
which is possible in D too, if you want it.


Re: GCs in the news

2014-07-19 Thread via Digitalmars-d

On Thursday, 17 July 2014 at 14:05:02 UTC, Brian Rogoff wrote:

On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:
If D came without GC, it would have replaced C++ a long time 
ago!


That's overly optimistic I think, but I believe that the 
adoption rate would have been far greater for a D without GC, 
or perhaps with a more GC friendly design, as the GC comes up 
first or close in every D discussion with prospective adopters.


This claim is being made frequently, but you need to consider 
that D started out as a much simpler language than it is today. 
Many of the distinguishing advantages of D can only be made 
possible _in a safe way_ when there is a GC. Everyone seems to 
agree, for example, that array slicing is one of these features. 
Without a GC, you'd either have to add a complicated reference 
counting scheme, thus destroying performance and simplicity, or 
you'd have to rely on the user for ownership management, which is 
unsafe. (A third way would be borrowing, which D doesn't have 
(yet).) I also believe that the Range concept was introduced at a 
later stage in D's history, thus the GC avoidance strategies that 
are being implemented in Phobos right now weren't available back 
then.


Therefore I cannot agree that D would have been adopted more 
eagerly without a GC; in fact, the adoption rate would have 
likely been less, because the language would have been crippled.




However, it's way too late to change that now. IMO, the way 
forward involves removing all or most hidden allocations from 
the D libraries, making programming sans GC easier (@nogc 
everywhere, a compiler switch, documentation for how to work 
around the lack of GC, etc.) and a much better, precise GC as 
part of the D release. Any spec changes necessary to support 
precision should be in a fast path.


Add borrowing!
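
For anyone who has not tried the attribute yet, a minimal sketch of what
@nogc checking (DMD 2.066+) looks like in practice:

// The compiler rejects anything in an @nogc body that would allocate
// on the GC heap, which is what makes it usable for auditing Phobos.
@nogc int sum(const(int)[] values)
{
    int total = 0;
    foreach (v; values)          // slicing and iteration: no hidden allocation
        total += v;
    return total;
    // auto grown = values ~ 1;  // error if uncommented: ~ allocates
}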


Re: GCs in the news

2014-07-18 Thread Dicebot via Digitalmars-d
On Friday, 18 July 2014 at 00:08:17 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Thu, Jul 17, 2014 at 06:32:58PM +, Dicebot via 
Digitalmars-d wrote:
On Thursday, 17 July 2014 at 18:22:11 UTC, H. S. Teoh via 
Digitalmars-d

wrote:
>Actually, I've realized that output ranges are really only 
>useful
>when you want to store the final result. For data in 
>mid-processing,
>you really want to be exporting an input (or higher) range 
>interface

>instead, because functions that take output ranges are not
>composable.  And for storing final results, you just use
>std.algorithm.copy, so there's really no need for many 
>functions to

>take an output range at all.

Plain algorithm ranges rarely need to allocate at all so those 
are
somewhat irrelevant to the topic. What I am speaking about are 
variety

of utility functions like this:

S detab(S)(S s, size_t tabSize = 8)
    if (isSomeString!S)

this allocates result string. Proper alternative:

S detab(S)(ref S output, size_t tabSize = 8)
    if (isSomeString!S);

plus

void detab(S, OR)(OR output, size_t tab_Size = 8)
    if (isSomeString!S
        && isSomeString!(ElementType!OR))


I think you're missing the input parameter. :)

void detab(S, OR)(S s, OR output, size_t tabSize = 8) { ... }

I argue that you can just turn it into this:

auto withoutTabs(S)(S s, size_t tabSize = 8)
{
static struct Result {
... // implementation here
}
static assert(isInputRange!Result);
return Result(s, tabSize);
}

auto myInput = "...";
auto detabbedInput = myInput.withoutTabs.array;

// Or:
MyOutputRange sink; // allocate using whatever scheme you want
myInput.withoutTabs.copy(sink);

The algorithm itself doesn't need to know where the result will 
end up
-- sink could be stdout, in which case no allocation is needed 
at all.


Yes this looks better.


Re: GCs in the news

2014-07-18 Thread Chris via Digitalmars-d
On Thursday, 17 July 2014 at 18:19:04 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Thu, Jul 17, 2014 at 05:58:14PM +, Chris via 
Digitalmars-d wrote:
On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via 
Digitalmars-d

wrote:

[...]
>AFAIK some work still needs to be done with std.string; 
>Walter for
>one has started some work to implement range-based 
>equivalents for
>std.string functions, which would be non-allocating; we just 
>need a

>bit of work to push things through.
>
>DMD 2.066 will have @nogc, which will make it easy to 
>discover which
>remaining parts of Phobos are still not GC-free. Then we'll 
>know

>where to direct our efforts. :-)
>
>
>T

That's good news! See, we're getting there, just bear with us. This 
begs the question of course: how will this affect existing code? My 
code is string intensive.


I don't think it will affect existing code (esp. given Walter's 
stance
on breaking changes!). Probably the old GC-based string 
functions will
still be around for backwards-compatibility. Perhaps some of 
them might
be replaced with non-GC versions where it can be done 
transparently, but
I'd expect you'd need to rewrite your string code to take 
advantage of
the new range-based stuff. Hopefully the rewrites will be 
minimal (e.g.,
pass in an output range as argument instead of getting a 
returned
string, replace allocation-based code with a UFCS chain, etc.). 
The

ideal scenario may very well be as simple as tacking on
`.copy(myBuffer)` at the end of a UFCS chain. :-P


T


That sounds good to me! This gives me time to upgrade my old code 
little by little and use the new approach when writing new code. 
Phew!


By the way, my code is string intensive and I still have some 
suboptimal (greedy) ranges here and there. But believe it or not, 
they're no problem at all. The application (a plugin for a screen 
reader) is fast and responsive* (according to user feedback) like 
any other screen reader plugin, and it hasn't crashed for ages 
(thanks to GC?) - knock on wood! I use a lot of lazy ranges too 
plus some pointer magic for work intensive algorithms. Plus D let 
me easily model the various relations between text and speech 
(for other use cases down the road). Maybe it is not a real time 
system, but it has to be responsive. So far, GC hasn't affected 
it negatively. Once the online version is publicly 
available, I will report how well vibe.d performs. Current 
results are encouraging.


As regards Java, the big advantage of D is that it compiles to a 
native DLL and all users have to do is to double click on it to 
install. No "please download JVM" nightmare. I've been there. 
Users cannot handle it (why should they?), and to provide it as a 
developer is a waste of time and resources, and it might still go 
wrong which leaves both the users and the developers angry and 
frustrated.


* The only thing that bothers me is that there seems to be a 
slight audio latency problem on Windows, which is not D's fault. 
On Linux it speaks as soon as you press .




Re: GCs in the news

2014-07-18 Thread Kagamin via Digitalmars-d

On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:
Just from watching a few of the DConf 2014 talks, if you want 
performance you avoid the GC at all costs (even if that means 
allocating into huge predefined buffers). Once you're going to 
these lengths to avoid garbage collection it begs the question, 
why are you even using this language?


In D you have a choice to use the GC or not use it. You would want to 
not use it if you have a severe performance problem, which may or 
may not exist.
There's no guarantee another language is a silver bullet and will 
magically solve all problems.


Re: GCs in the news

2014-07-17 Thread Walter Bright via Digitalmars-d

On 7/17/2014 10:47 PM, H. S. Teoh via Digitalmars-d wrote:

Deferring the allocation point to the top level has the advantage of
letting high-level user code decide what the allocation strategy should
be, rather than percolating that decision down the call graph to every
low-level function.


Exactly.


Of course, it's not always possible to defer this, such as if you need
to tell a container which allocator to use. But IMO this should be
pushed up to higher-level code whenever possible.


Andrei's allocator scheme addresses this. It will also allow such decisions to 
be made at the high level.




Re: GCs in the news

2014-07-17 Thread Walter Bright via Digitalmars-d

On 7/17/2014 11:17 AM, H. S. Teoh via Digitalmars-d wrote:

I don't think it will affect existing code (esp. given Walter's stance
on breaking changes!). Probably the old GC-based string functions will
still be around for backwards-compatibility. Perhaps some of them might
be replaced with non-GC versions where it can be done transparently, but
I'd expect you'd need to rewrite your string code to take advantage of
the new range-based stuff. Hopefully the rewrites will be minimal (e.g.,
pass in an output range as argument instead of getting a returned
string, replace allocation-based code with a UFCS chain, etc.). The
ideal scenario may very well be as simple as tacking on
`.copy(myBuffer)` at the end of a UFCS chain. :-P


Boss, dat's pretty much de plan, de plan!



Re: GCs in the news

2014-07-17 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 17, 2014 at 10:33:26PM -0700, Walter Bright via Digitalmars-d wrote:
> On 7/17/2014 3:16 PM, Dicebot wrote:
> >On Thursday, 17 July 2014 at 22:06:01 UTC, Brad Anderson wrote:
> >>I agreed with this for awhile but following the conversation here
> >> I'm
> >>more inclined to think we should be adding lazy versions of
> >>functions where possible rather than versions with OutputRange
> >>parameters. It's more flexible that way and can result in even fewer
> >>allocations than even OutputRange parameters would have (i.e. you
> >>can have chains of lazy operations and only allocate on the final
> >>step, or not at all in some cases).
> >>
> >>Laziness isn't appropriate or possible everywhere but it's much
> >>easier to go from lazy to eager than the other way around.
> >>
> >>>[...]
> >
> >This is not comparable. Lazy input range based solutions do not make
> >it possible to change allocation strategy, they simply defer the
> >allocation point. Ideally both are needed.
> 
> They move the allocation point to the top level, rather than the
> bottom or intermediate level.

Deferring the allocation point to the top level has the advantage of
letting high-level user code decide what the allocation strategy should
be, rather than percolating that decision down the call graph to every
low-level function.

Of course, it's not always possible to defer this, such as if you need
to tell a container which allocator to use. But IMO this should be
pushed up to higher-level code whenever possible.


T

-- 
Why can't you just be a nonconformist like everyone else? -- YHL


Re: GCs in the news

2014-07-17 Thread Walter Bright via Digitalmars-d

On 7/17/2014 11:32 AM, Dicebot wrote:

Plain algorithm ranges rarely need to allocate at all so those are somewhat
irrelevant to the topic. What I am speaking about are variety of utility
functions like this:

S detab(S)(S s, size_t tabSize = 8)
    if (isSomeString!S)

this allocates result string. Proper alternative:

S detab(S)(ref S output, size_t tabSize = 8)
    if (isSomeString!S);

plus

void detab(S, OR)(OR output, size_t tab_Size = 8)
    if (isSomeString!S
        && isSomeString!(ElementType!OR))


That algorithm takes a string and writes to an output range. This is not very 
composable. For example, what if one has an input range of chars, rather than a 
string? And what if one wants to tack more processing on the end?


A better interface is the one used by the byChar, byWchar, and byDchar ranges 
recently added to std.utf. Those accept an input range, and present an input 
range as "output". They are very composable, and can be stuck in anywhere in a 
character processing pipeline. They do no allocations, and are completely lazy.


The byChar algorithm in particular can serve as an outline for how to do a detab 
algorithm, most of the code can be reused for that.
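
A rough sketch of that shape, assuming a withoutTabs name and a simplified
expansion (every tab becomes a fixed run of spaces rather than tracking the
current column), so it is an outline of the interface rather than a finished
detab:

import std.range : ElementType, isInputRange;

auto withoutTabs(R)(R input, size_t tabSize = 8)
    if (isInputRange!R && is(ElementType!R : dchar))
{
    static struct Result
    {
        R src;
        size_t tabSize;
        size_t pending;  // spaces still owed for the tab just consumed

        @property bool empty() { return pending == 0 && src.empty; }

        @property dchar front()
        {
            return (pending > 0 || src.front == '\t') ? ' ' : src.front;
        }

        void popFront()
        {
            if (pending > 0) { --pending; return; }
            if (src.front == '\t') pending = tabSize - 1;
            src.popFront();
        }
    }
    return Result(input, tabSize);
}

unittest
{
    import std.algorithm : equal;
    // Lazy and allocation-free; append .array or .copy(sink) only when a
    // materialised result is actually wanted.
    assert("a\tb".withoutTabs(4).equal("a    b"));
}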


Re: GCs in the news

2014-07-17 Thread Walter Bright via Digitalmars-d

On 7/17/2014 4:01 PM, H. S. Teoh via Digitalmars-d wrote:

As Brad said, it's far easier to go from lazy to eager than the other
way round, e.g., by sticking .array at the end, or .copy(buf) where buf
is allocated according to whatever scheme the user chooses. Since buf is
declared by the user, the user is free to use whatever allocation
mechanism he wishes, the string algorithm doesn't know nor care what it
is (and it shouldn't need to).


Yup. It enables separating the allocation strategy from the algorithm.



Re: GCs in the news

2014-07-17 Thread Walter Bright via Digitalmars-d

On 7/17/2014 3:16 PM, Dicebot wrote:

On Thursday, 17 July 2014 at 22:06:01 UTC, Brad Anderson wrote:

I agreed with this for awhile but following the conversation here
 I'm more inclined
to think we should be adding lazy versions of functions where possible rather
than versions with OutputRange parameters. It's more flexible that way and can
result in even fewer allocations than even OutputRange parameters would have
(i.e. you can have chains of lazy operations and only allocate on the final
step, or not at all in some cases).

Laziness isn't appropriate or possible everywhere but it's much easier to go
from lazy to eager than the other way around.


[...]


This is not comparable. Lazy input range based solutions do not make it possible
to change allocation strategy, they simply defer the allocation point. Ideally
both are needed.


They move the allocation point to the top level, rather than the bottom or 
intermediate level.


Re: GCs in the news

2014-07-17 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 17, 2014 at 06:32:58PM +, Dicebot via Digitalmars-d wrote:
> On Thursday, 17 July 2014 at 18:22:11 UTC, H. S. Teoh via Digitalmars-d
> wrote:
> >Actually, I've realized that output ranges are really only useful
> >when you want to store the final result. For data in mid-processing,
> >you really want to be exporting an input (or higher) range interface
> >instead, because functions that take output ranges are not
> >composable.  And for storing final results, you just use
> >std.algorithm.copy, so there's really no need for many functions to
> >take an output range at all.
> 
> Plain algorithm ranges rarely need to allocate at all so those are
> somewhat irrelevant to the topic. What I am speaking about are variety
> of utility functions like this:
> 
> S detab(S)(S s, size_t tabSize = 8)
> if (isSomeString!S)
> 
> this allocates result string. Proper alternative:
> 
> S detab(S)(ref S output, size_t tabSize = 8)
> if (isSomeString!S);
> 
> plus
> 
> void detab(S, OR)(OR output, size_t tab_Size = 8)
> if (   isSomeString!S
> && isSomeString!(ElementType!OR)
>)

I think you're missing the input parameter. :)

void detab(S, OR)(S s, OR output, size_t tabSize = 8) { ... }

I argue that you can just turn it into this:

auto withoutTabs(S)(S s, size_t tabSize = 8)
{
static struct Result {
... // implementation here
}
static assert(isInputRange!Result);
return Result(s, tabSize);
}

auto myInput = "...";
auto detabbedInput = myInput.withoutTabs.array;

// Or:
MyOutputRange sink; // allocate using whatever scheme you want
myInput.withoutTabs.copy(sink);

The algorithm itself doesn't need to know where the result will end up
-- sink could be stdout, in which case no allocation is needed at all.


Or are you talking about in-place modification of the input string?
That's a different kettle o' fish.


T

-- 
EMACS = Extremely Massive And Cumbersome System


Re: GCs in the news

2014-07-17 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 17, 2014 at 10:27:51PM +, Dicebot via Digitalmars-d wrote:
> On Thursday, 17 July 2014 at 22:21:54 UTC, Brad Anderson wrote:
> >Well the idea is that you then copy into an output range with
> >whatever allocation strategy you want at the end. There is quite a
> >bit of overlap I think. Not complete overlap and OutputRange
> >accepting functions will still be needed but I think we should prefer
> >the lazy approach where possible.
> 
> It is not always possible - sometimes the resulting range element must
> already be a "cooked" object.

Example?


> I do agree it is a powerful default when feasible though. At the same
> time simple output range overloads are much faster to add.

As Brad said, it's far easier to go from lazy to eager than the other
way round, e.g., by sticking .array at the end, or .copy(buf) where buf
is allocated according to whatever scheme the user chooses. Since buf is
declared by the user, the user is free to use whatever allocation
mechanism he wishes, the string algorithm doesn't know nor care what it
is (and it shouldn't need to).


T

-- 
What do you mean the Internet isn't filled with subliminal messages? What about 
all those buttons marked "submit"??


Re: GCs in the news

2014-07-17 Thread Chris via Digitalmars-d

On Thursday, 17 July 2014 at 22:27:52 UTC, Dicebot wrote:

On Thursday, 17 July 2014 at 22:21:54 UTC, Brad Anderson wrote:
Well the idea is that you then copy into an output range with 
whatever allocation strategy you want at the end. There is 
quite a bit of overlap I think. Not complete overlap and 
OutputRange accepting functions will still be needed but I 
think we should prefer the lazy approach where possible.


It is not always possible - sometimes the resulting range element 
must already be a "cooked" object. I do agree it is a powerful 
default when feasible though. At the same time simple output 
range overloads are much faster to add.


From what I'm getting, we might have the chance here to 
redefine memory usage, as was pointed out by Teoh et al. Reduce 
allocations as much as possible, avoiding a problem in the first 
place is better than solving it. It's worth thinking in this 
direction, cos the GC / RC issue will always boil down to the 
fact that there is a price to be paid.


Re: GCs in the news

2014-07-17 Thread Dicebot via Digitalmars-d

On Thursday, 17 July 2014 at 22:21:54 UTC, Brad Anderson wrote:
Well the idea is that you then copy into an output range with 
whatever allocation strategy you want at the end. There is 
quite a bit of overlap I think. Not complete overlap and 
OutputRange accepting functions will still be needed but I 
think we should prefer the lazy approach where possible.


It is not always possible - sometimes the resulting range element 
must already be a "cooked" object. I do agree it is a powerful 
default when feasible though. At the same time simple output 
range overloads are much faster to add.


Re: GCs in the news

2014-07-17 Thread Right via Digitalmars-d


 UE4 wasn't really rewritten from scratch; it was more like: take 
UE3, rewrite various parts and add new features, and keep doing 
that for a few years.


 Code style isn't modern C++.
No lambdas, r-value refs, unique types, or algorithms (everyone just 
bangs out for loops); the task implementation is laughable, and the 
code is mostly single threaded.


Basically verbosity hell.

 The dependency on GC is the same as in previous versions; they did 
not fundamentally change the object model in UE4. I think they 
did work on the GC, so perhaps it is faster /shrug. They only use 
the GC for certain objects (those deriving from UObject).


 Powerful engine? Yes for sure. If I needed to make a graphically 
AAA game ASAP I'd use UE4. Doesn't change the fact that the code 
is nothing impressive.


The Blueprint system technically compiles down to UnrealScript 
bytecode -- but yes, UnrealScript is dead, thankfully.




UEngine has been rewritten from scratch.
UnrealScript doesn't even exist anymore.
It is the new UEngine that depends on GC, and we're talking 
C++, not UnrealScript here (again, UnrealScript is gone).




Re: GCs in the news

2014-07-17 Thread Brad Anderson via Digitalmars-d

On Thursday, 17 July 2014 at 22:16:10 UTC, Dicebot wrote:

On Thursday, 17 July 2014 at 22:06:01 UTC, Brad Anderson wrote:
I agreed with this for awhile but following the conversation 
here 
 
I'm more inclined to think we should be adding lazy versions 
of functions where possible rather than versions with 
OutputRange parameters. It's more flexible that way and can 
result in even fewer allocations than even OutputRange 
parameters would have (i.e. you can have chains of lazy 
operations and only allocate on the final step, or not at all 
in some cases).


Laziness isn't appropriate or possible everywhere but it's 
much easier to go from lazy to eager than the other way around.



[...]


This is not comparable. Lazy input range based solutions do not 
make it possible to change allocation strategy, they simply 
defer the allocation point. Ideally both are needed.


Well the idea is that you then copy into an output range with 
whatever allocation strategy you want at the end. There is quite 
a bit of overlap I think. Not complete overlap and OutputRange 
accepting functions will still be needed but I think we should 
prefer the lazy approach where possible.


Re: GCs in the news

2014-07-17 Thread Dicebot via Digitalmars-d

On Thursday, 17 July 2014 at 22:06:01 UTC, Brad Anderson wrote:
I agreed with this for awhile but following the conversation 
here 
 
I'm more inclined to think we should be adding lazy versions of 
functions where possible rather than versions with OutputRange 
parameters. It's more flexible that way and can result in even 
fewer allocations than even OutputRange parameters would have 
(i.e. you can have chains of lazy operations and only allocate 
on the final step, or not at all in some cases).


Laziness isn't appropriate or possible everywhere but it's much 
easier to go from lazy to eager than the other way around.



[...]


This is not comparable. Lazy input range based solutions do not 
make it possible to change allocation strategy, they simply defer 
the allocation point. Ideally both are needed.


Re: GCs in the news

2014-07-17 Thread Brad Anderson via Digitalmars-d

On Thursday, 17 July 2014 at 12:37:10 UTC, w0rp wrote:
The key to making D's GC acceptable lies in two factors I 
believe.


1. Improve the implementation enough so that you will only be 
impacted by GC in extremely low memory or real time 
environments.
2. Defer allocation more and more by using ranges and 
algorithms more, and trust that compiler optimisations will 
make these fast.


The big, big offender I believe for extra allocations is 
functions which return objects, rather than functions which 
write to output ranges. The single most common occurrence of 
this is toString. Instead of writing this...


string toString() {
// Allocations the user of the library has no control over.
return foo.toString() ~ bar.toString() ~ " something else";
}

I believe you should always, always instead write this.

// I left out the part with different character types.
void writeString(OutputRange)(OutputRange outputRange)
if (isOutputRange!(OutputRange, char)) {
// Allocations controlled by the user of the library,
// this template could appear in a @nogc function.
foo.writeString(outputRange);
bar.writeString(outputRange);

"something else".copy(outputRange);
}



I agreed with this for awhile but following the conversation here 
 I'm 
more inclined to think we should be adding lazy versions of 
functions where possible rather than versions with OutputRange 
parameters. It's more flexible that way and can result in even 
fewer allocations than even OutputRange parameters would have 
(i.e. you can have chains of lazy operations and only allocate on 
the final step, or not at all in some cases).


Laziness isn't appropriate or possible everywhere but it's much 
easier to go from lazy to eager than the other way around.



[...]


Re: GCs in the news

2014-07-17 Thread Kiith-Sa via Digitalmars-d

On Thursday, 17 July 2014 at 19:14:06 UTC, Right wrote:
 I'm rather fond of RAII, I find that I rarely ever need 
shared semantics.
 I use a custom object model that allows for weak_ptrs to 
unique_ptrs which I think removes some cases where people might 
otherwise be inclined to use shared_ptr.


 Shared semantics are so rare in fact I would say I hardly use 
it at all, I go for weeks of coding without creating a shared 
type, not because I'm trying to do so, but because it just 
isn't necessary.


 Which is why GC seems like such a waste, given my experience 
in C++, where I hardly need shared memory, I see little use for 
a GC(or even ARC etc), all it will do is decrease program 
performance, make deterministic destruction impossible, and 
prevent automatic cleanup of non-memory resources.


 Rust seems to have caught on to what C++ has accomplished here.


 Oh, and Unreal? Yes they have a GC type "UObject", I worked on 
Unreal at one point, my impression was that this originated 
back with the original Unreal(circa 1998?), likely caused by 
the popularity of Java at the time. As for the Unreal code 
base? Pass on that.


UEngine has been rewritten from scratch.
UnrealScript doesn't even exist anymore.
It is the new UEngine that depends on GC, and we're talking C++, 
not UnrealScript here (again, UnrealScript is gone).


Re: GCs in the news

2014-07-17 Thread Andrei Alexandrescu via Digitalmars-d

On 7/17/14, 12:26 PM, Ary Borenszweig wrote:

On 7/17/14, 3:55 PM, Andrei Alexandrescu wrote:

On 7/17/14, 11:11 AM, Ary Borenszweig wrote:

On 7/17/14, 2:32 PM, Right wrote:

  I hate GC, so there.


I see no proof of this. And not everybody hates GCs.

Bye,
bearophile




Java is everywhere and it has a GC. Go is starting to be everywhere and
it has a GC. C# too has a GC, and I think they use it to make games too.
I don't think everyone hates GCs. :-)


http://www.stroustrup.com/C++11FAQ.html#gc-abi

Andrei


Sorry, but I don't understand your reply by just reading that link.


There's work on adding optional GC to C++ starting with C++11. -- Andrei


Re: GCs in the news

2014-07-17 Thread Ary Borenszweig via Digitalmars-d

On 7/17/14, 3:55 PM, Andrei Alexandrescu wrote:

On 7/17/14, 11:11 AM, Ary Borenszweig wrote:

On 7/17/14, 2:32 PM, Right wrote:

  I hate GC, so there.


I see no proof of this. And not everybody hates GCs.

Bye,
bearophile




Java is everywhere and it has a GC. Go is starting to be everywhere and
it has a GC. C# too has a GC, and I think they use it to make games too.
I don't think everyone hates GCs. :-)


http://www.stroustrup.com/C++11FAQ.html#gc-abi

Andrei


Sorry, but I don't understand your reply by just reading that link.


Re: GCs in the news

2014-07-17 Thread Right via Digitalmars-d
 I'm rather fond of RAII, I find that I rarely ever need shared 
semantics.
 I use a custom object model that allows for weak_ptrs to 
unique_ptrs which I think removes some cases where people might 
otherwise be inclined to use shared_ptr.


 Shared semantics are so rare in fact I would say I hardly use it 
at all, I go for weeks of coding without creating a shared type, 
not because I'm trying to do so, but because it just isn't 
necessary.


 Which is why GC seems like such a waste, given my experience in 
C++, where I hardly need shared memory, I see little use for a 
GC(or even ARC etc), all it will do is decrease program 
performance, make deterministic destruction impossible, and 
prevent automatic cleanup of non-memory resources.


 Rust seems to have caught on to what C++ has accomplished here.


 Oh, and Unreal? Yes they have a GC type "UObject", I worked on 
Unreal at one point, my impression was that this originated back 
with the original Unreal(circa 1998?), likely caused by the 
popularity of Java at the time. As for the Unreal code base? Pass 
on that.




Re: GCs in the news

2014-07-17 Thread Remo via Digitalmars-d

On Thursday, 17 July 2014 at 17:36:36 UTC, Vic wrote:

On Thursday, 17 July 2014 at 13:02:22 UTC, Remo wrote:


The quality of GC implementation is probably more important.



I disagree, I am a burn victim and don't trust smoke.


Well, it appears to be very hard to make a proper GC.
So all the hate against GC could be because of a suboptimal 
implementation?
Anyway, as written before, memory is not the only resource that 
needs to be managed. So a language needs to offer solutions not 
only for memory management but for all other resources as well.

In C++ this is called RAII and it works reasonably well.
Rust looks even more promising to me.


Ideally it is optional.


Yes for me too.
GC must be optional.
I hope @nogc will allow this for D.



Cheers,
Vic




Re: GCs in the news

2014-07-17 Thread Andrei Alexandrescu via Digitalmars-d

On 7/17/14, 11:11 AM, Ary Borenszweig wrote:

On 7/17/14, 2:32 PM, Right wrote:

  I hate GC, so there.


I see no proof of this. And not everybody hates GCs.

Bye,
bearophile




Java is everywhere and it has a GC. Go is starting to be everywhere and
it has a GC. C# too has a GC, and I think they use it to make games too.
I don't think everyone hates GCs. :-)


http://www.stroustrup.com/C++11FAQ.html#gc-abi

Andrei


Re: GCs in the news

2014-07-17 Thread Russel Winder via Digitalmars-d
On Thu, 2014-07-17 at 15:11 -0300, Ary Borenszweig via Digitalmars-d
wrote:
[…]
> Java is everywhere and it has a GC. Go is starting to be everywhere and 
> it has a GC. C# too has a GC, and I think they use it to make games too. 
> I don't think everyone hates GCs. :-)

I think we need to try and turn this to a more constructive debate and
the above gives a hook.

The Go thread is coming to the conclusion that they need a better GC
than they currently have. I suspect this will now become a unit of work
and that something good will come of it.

For many years GC in Java has been a bit of a problem; Java relies on
GC, yet the algorithms were always a bit of a compromise and second
rate. However Java now has the G1 garbage collector and there is
evidence and a huge amount of hope that this is actually a turning
point.

Java exhibits the behaviour of having a lot of very short lived objects
so it becomes crucial to be able to deal with object creation as a very
lightweight activity and for very lightweight collection of rapidly
useless objects. Java originally went for a generational GC strategy but
this has always led to problems especially in a multicore context.
Taking an alternative strategy, G1 has seemingly ameliorated a lot of
the problems leading to a system that is not "stop the world", is
multicore and multithread compatible, and works very well such that soft
real time is seemingly not a problem.

I have no data re C#.

With C++ I am coming to grips with RAII management of the heap. With
Java, Groovy, Go and Python I rely on the GC doing a good job. I note
though that there is a lot of evidence that the Unreal folk developed a
garbage collector for C++ exactly because they didn't want to do the
RAII thing.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


signature.asc
Description: This is a digitally signed message part


Re: GCs in the news

2014-07-17 Thread Dicebot via Digitalmars-d
On Thursday, 17 July 2014 at 18:22:11 UTC, H. S. Teoh via 
Digitalmars-d wrote:
Actually, I've realized that output ranges are really only useful when
you want to store the final result. For data in mid-processing, you
really want to be exporting an input (or higher) range interface
instead, because functions that take output ranges are not composable.
And for storing final results, you just use std.algorithm.copy, so
there's really no need for many functions to take an output range at
all.


Plain algorithm ranges rarely need to allocate at all so those 
are somewhat irrelevant to the topic. What I am speaking about 
are variety of utility functions like this:


S detab(S)(S s, size_t tabSize = 8)
if (isSomeString!S)

this allocates result string. Proper alternative:

S detab(S)(ref S output, size_t tabSize = 8)
if (isSomeString!S);

plus

void detab(S, OR)(OR output, size_t tab_Size = 8)
if (   isSomeString!S
&& isSomeString!(ElementType!OR)
   )


Re: GCs in the news

2014-07-17 Thread bearophile via Digitalmars-d

H. S. Teoh:

I don't think it will affect existing code (esp. given Walter's 
stance on breaking changes!).


Making various parts of Phobos GC-free doesn't mean that nothing 
GC-allocates, it means that Phobos will offer means to use memory 
provided by the user. There are many situations where using a GC 
is OK, so both kinds of usages should be supported by Phobos. It 
should contain nothrow @nogc functions to format and to convert 
to number and strings. It's a matter of offering choice.


Bye,
bearophile


Re: GCs in the news

2014-07-17 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 17, 2014 at 06:09:49PM +, deadalnix via Digitalmars-d wrote:
> On Thursday, 17 July 2014 at 18:08:18 UTC, Dicebot wrote:
> >On Thursday, 17 July 2014 at 17:58:15 UTC, Chris wrote:
> >>That's good news! See, we're getting there, just bear with us. This
> >>begs the question of course, how will this affect existing code? My
> >>code is string intensive.
> >
> >Usually GC-free API is added by providing new overloads that take an
> >output range instance as an argument so no existing code should break
> >(it will still use allocating versions)
> 
> Yes, output ranges are underused by now.

Actually, I've realized that output ranges are really only useful when
you want to store the final result. For data in mid-processing, you
really want to be exporting an input (or higher) range interface
instead, because functions that take output ranges are not composable.
And for storing final results, you just use std.algorithm.copy, so
there's really no need for many functions to take an output range at
all.


T

-- 
One Word to write them all, One Access to find them, One Excel to count them 
all, And thus to Windows bind them. -- Mike Champion


Re: GCs in the news

2014-07-17 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 17, 2014 at 05:58:14PM +, Chris via Digitalmars-d wrote:
> On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via Digitalmars-d
> wrote:
[...]
> >AFAIK some work still needs to be done with std.string; Walter for
> >one has started some work to implement range-based equivalents for
> >std.string functions, which would be non-allocating; we just need a
> >bit of work to push things through.
> >
> >DMD 2.066 will have @nogc, which will make it easy to discover which
> >remaining parts of Phobos are still not GC-free. Then we'll know
> >where to direct our efforts. :-)
> >
> >
> >T
> 
> That's good news! See, we're getting there, just bear with us. This
> begs the question of course, how will this affect existing code? My
> code is string intensive.

I don't think it will affect existing code (esp. given Walter's stance
on breaking changes!). Probably the old GC-based string functions will
still be around for backwards-compatibility. Perhaps some of them might
be replaced with non-GC versions where it can be done transparently, but
I'd expect you'd need to rewrite your string code to take advantage of
the new range-based stuff. Hopefully the rewrites will be minimal (e.g.,
pass in an output range as argument instead of getting a returned
string, replace allocation-based code with a UFCS chain, etc.). The
ideal scenario may very well be as simple as tacking on
`.copy(myBuffer)` at the end of a UFCS chain. :-P
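
(For what it's worth, a hedged sketch of what such a chain could look
like -- my own example, not from Phobos; the tab-stripping lambda and the
Appender sink are just placeholders. The point is that the lazy pipeline
allocates nothing, and the allocation strategy lives entirely in the
caller's choice of sink.)

import std.array : appender;
import std.algorithm.iteration : map;
import std.algorithm.mutation : copy;
import std.ascii : toUpper;

void example()
{
    auto input = "some\ttext with\ttabs";

    // Lazy pipeline: composing these allocates nothing.
    auto pipeline = input.map!(c => c == '\t' ? ' ' : c)
                         .map!toUpper;

    // Eager step: the caller picks the destination, and hence the
    // allocation scheme. An Appender here, but any output range works.
    auto buf = appender!string();
    pipeline.copy(buf);

    assert(buf.data == "SOME TEXT WITH TABS");
}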


T

-- 
Genius may have its limitations, but stupidity is not thus handicapped. -- 
Elbert Hubbard


Re: GCs in the news

2014-07-17 Thread Abdulhaq via Digitalmars-d

On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:

On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:



If D came without GC, it would have replaced C++ a long time 
ago!


Agree +1000.

If GC is so good, why not make it an option, have a base lib 
w/o GC.


If I want GC, I got me JRE. It seems that some in D want to 
write a better JRE, and that just won't happen ever.


Cheers,
Vic


I can't think of anyone posting here, to be honest, who wants to 
write a better JRE. The JRE is a virtual machine, and java 
compiles to bytecode that is run on the JVM. On the contrary, and 
in accordance with the core principle that D is a systems 
programming language, D compiles to native and (hopefully) highly 
optimised native machine code. There does exist something of a 
'culture clash' where, by the very nature of GCs, there can be 
not-insignificant pauses in the running of the program that would 
be inimical to real-time software such as high res complex 
games, operating systems, drivers etc.


The response to this in the forums is either to improve the GC so 
that it doesn't ever pause for more than a certain amount of time 
(e.g. concurrent GCs, remove the global lock so other threads can 
continue to run), or to offer alternative memory management 
approaches such as ARC, which can also have pauses, but at other 
inflections as the program runs.


Personally I'm a bit disappointed that the good work that has 
been done on GCs so far doesn't seem to be getting picked up and 
run with, nor do I see any reasons given as to why that is the 
case. Andrei was threatening to start another GC at one point, 
but unfortunately I haven't seen any more of that, and we all 
know how short of time everyone seems to be these days.


Also on a personal note, I see some slightly snarky comments 
about D targeting C# and Java. Well from my perspective I'm 
extremely happy with the fact that D is a better C# and a better 
Java. I just wish it had Qt (I must finish my bindings for Qt) 
and/or ran on Android! The GC issues are irrelevant for me.


Re: GCs in the news

2014-07-17 Thread Ary Borenszweig via Digitalmars-d

On 7/17/14, 2:32 PM, Right wrote:

  I hate GC, so there.


I see no proof of this. And not everybody hates GCs.

Bye,
bearophile




Java is everywhere and it has a GC. Go is starting to be everywhere and 
it has a GC. C# too has a GC, and I think they use it to make games too. 
I don't think everyone hates GCs. :-)


Re: GCs in the news

2014-07-17 Thread Dicebot via Digitalmars-d

On Thursday, 17 July 2014 at 17:28:02 UTC, Vic wrote:
If that is true, I may even do a $ bounty to make Phobos GC 
free.


Unless you do some hard real-time barebone stuff it is quite 
likely you can do with limited usage of GC. Hiring an 
experienced D user to do a one-time case study with detailed 
recommendations can be an option if you are seriously concerned.



I may do the same, $ bounty on vibe.d port to GC free.


vibe.d has -version=VibedManualMemoryManagement which removes 
much of GC usage from its internals. Not 100% @nogc but some 
entry point to start with for interested parties.


I don't know D enough to be able to do that, but good news to 
me.


Here Don mentions some of techniques we (Sociomantic) use to 
minimize GC impact : https://www.youtube.com/watch?v=WmE7ZR1_YKs


In the end it comes down to the famous Bjarne quote: "C++ may be the 
best language for garbage collection because it generates so little 
garbage". The same can be applied to D with a proper coding style.




Re: GCs in the news

2014-07-17 Thread deadalnix via Digitalmars-d

On Thursday, 17 July 2014 at 18:08:18 UTC, Dicebot wrote:

On Thursday, 17 July 2014 at 17:58:15 UTC, Chris wrote:
That's good news! See, we're getting there, just bear with us. 
This begs the question of course, how will this affect 
existing code? My code is string intensive.


Usually GC-free API is added by providing new overloads that 
take an output range instance as an argument so no existing 
code should break (it will still use allocating versions)


Yes, output ranges are underused by now.


Re: GCs in the news

2014-07-17 Thread Dicebot via Digitalmars-d

On Thursday, 17 July 2014 at 17:58:15 UTC, Chris wrote:
That's good news! See, we're getting there, just bear with us. 
This begs the question of course, how will this affect existing 
code? My code is string intensive.


Usually GC-free API is added by providing new overloads that take 
an output range instance as an argument so no existing code 
should break (it will still use allocating versions)


Re: GCs in the news

2014-07-17 Thread Chris via Digitalmars-d
On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Thu, Jul 17, 2014 at 05:28:01PM +, Vic via Digitalmars-d 
wrote:
On Thursday, 17 July 2014 at 17:13:04 UTC, Peter Alexander 
wrote:

>On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:
>>If GC is so good, why not make it an option, have a base lib w/o GC.
>
>Much of Phobos already is GC free. The parts that aren't should be
>easy to convert to use user-supplied buffers. Please add enhancement
>requests for cases where there isn't a GC-free alternative to a
>standard library routine.

If that is true, I may even do a $ bounty to make Phobos GC free.

I may do the same, $ bounty on vibe.d port to GC free.

I don't know D enough to be able to do that, but good news to me.

[...]

Over the last year or so, IIRC, there has been a push (a slow but
nonetheless steady push) to make as much of Phobos GC-free as possible.
I'd say most (all?) of std.algorithm and std.range should be GC-free by
now, and probably many of the others can be made GC-free quite easily
with the tools that we now have.

AFAIK some work still needs to be done with std.string; Walter for one
has started some work to implement range-based equivalents for
std.string functions, which would be non-allocating; we just need a bit
of work to push things through.

DMD 2.066 will have @nogc, which will make it easy to discover which
remaining parts of Phobos are still not GC-free. Then we'll know where
to direct our efforts. :-)


T


That's good news! See, we're getting there, just bear with us. 
This begs the question of course, how will this affect existing 
code? My code is string intensive.


Re: GCs in the news

2014-07-17 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 17, 2014 at 05:28:01PM +, Vic via Digitalmars-d wrote:
> On Thursday, 17 July 2014 at 17:13:04 UTC, Peter Alexander wrote:
> >On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:
> >>If GC is so good, why not make it an option, have a base lib w/o GC.
> >
> >Much of Phobos already is GC free. The parts that aren't should be
> >easy to convert to use user-supplied buffers. Please add enhancement
> >requests for cases where there isn't a GC-free alternative to a
> >standard library routine.
> 
> If that is true, I may even do a $ bounty to make Phobos GC free.
> 
> I may do the same, $ bounty on vibe.d port to GC free.
> 
> I don't know D enough to be able to do that, but good news to me.
[...]

Over the last year or so, IIRC, there has been a push (a slow but
nonetheless steady push) to make as much of Phobos GC-free as possible.
I'd say most (all?) of std.algorithm and std.range should be GC-free by
now, and probably many of the others can be made GC-free quite easily
with the tools that we now have.

AFAIK some work still needs to be done with std.string; Walter for one
has started some work to implement range-based equivalents for
std.string functions, which would be non-allocating; we just need a bit
of work to push things through.

DMD 2.066 will have @nogc, which will make it easy to discover which
remaining parts of Phobos are still not GC-free. Then we'll know where
to direct our efforts. :-)
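
(A small illustration of my own, assuming the @nogc semantics as
documented for 2.066: marking a function @nogc turns every GC
allocation in it, direct or hidden, into a compile-time error, which is
exactly what makes the remaining offenders easy to find.)

@nogc void noAllocationsHere(char[] buf, int x)
{
    import core.stdc.stdio : snprintf;

    // Fine: formats into a caller-supplied buffer, no GC involved.
    snprintf(buf.ptr, buf.length, "%d", x);

    // Either of these would be rejected by the compiler inside @nogc,
    // because they allocate with the GC:
    //auto tmp = new char[](16);
    //auto cat = buf ~ 'c';
}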


T

-- 
Elegant or ugly code as well as fine or rude sentences have something in
common: they don't depend on the language. -- Luca De Vitis


Re: GCs in the news

2014-07-17 Thread H. S. Teoh via Digitalmars-d
On Thu, Jul 17, 2014 at 05:32:36PM +, Right via Digitalmars-d wrote:
>  I hate GC, so there.
> 
> >I see no proof of this. And not everybody hates GCs.
[...]

I don't, so here. :D


T

-- 
I see that you JS got Bach.


Re: GCs in the news

2014-07-17 Thread Vic via Digitalmars-d

On Thursday, 17 July 2014 at 13:02:22 UTC, Remo wrote:


The quality of GC implementation is probably more important.



I disagree, I am a burn victim and don't trust smoke.

Ideally it is optional.

Cheers,
Vic


Re: GCs in the news

2014-07-17 Thread Right via Digitalmars-d

 I hate GC, so there.


I see no proof of this. And not everybody hates GCs.

Bye,
bearophile




Re: GCs in the news

2014-07-17 Thread Vic via Digitalmars-d

On Thursday, 17 July 2014 at 17:13:04 UTC, Peter Alexander wrote:

On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:
If GC is so good, why not make it an option, have a base lib 
w/o GC.


Much of Phobos already is GC free. The parts that aren't should 
be easy to convert to use user-supplied buffers. Please add 
enhancement requests for cases where there isn't a GC-free 
alternative to a standard library routine.


If that is true, I may even do a $ bounty to make Phobos GC free.

I may do the same, $ bounty on vibe.d port to GC free.

I don't know D enough to be able to do that, but good news to me.

Cheers,
Vic


Re: GCs in the news

2014-07-17 Thread bearophile via Digitalmars-d

Vic:

If D came without GC, it would have replaced C++ a long time 
ago!


Agree +1000.


I see no proof of this. And not everybody hates GCs.

Bye,
bearophile


Re: GCs in the news

2014-07-17 Thread Peter Alexander via Digitalmars-d

On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:
If GC is so good, why not make it an option, have a base lib 
w/o GC.


Much of Phobos already is GC free. The parts that aren't should 
be easy to convert to use user-supplied buffers. Please add 
enhancement requests for cases where there isn't a GC-free 
alternative to a standard library routine.


Re: GCs in the news

2014-07-17 Thread Vic via Digitalmars-d

On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:



If D came without GC, it would have replaced C++ a long time 
ago!


Agree +1000.

If GC is so good, why not make it an option, have a base lib w/o 
GC.


If I want GC, I got me JRE. It seems that some in D want to write 
a better JRE, and that just won't happen ever.


Cheers,
Vic


Re: GCs in the news

2014-07-17 Thread thedeemon via Digitalmars-d

On Thursday, 17 July 2014 at 12:37:10 UTC, w0rp wrote:

For improving the GC to an acceptable level, I believe 
collection only needs to execute fast enough such that it will 
fit within a frame comfortably. So for something rendering at 
60FPS you have 1 second / 60 frames ~= 16.6 milliseconds of 
computation you can do without resulting in a single dropped 
frame. That means you need to get collection down to something 
in the 1ms to 2ms region.


That's easy, just make sure your heap never grows over 0.4 MB.
Seriously, 200 MB of small objects in the heap = 1 second.
That's how bad it is now.
And here Walter says it won't get much better. Ever.
http://www.reddit.com/r/programming/comments/2avdod/dconf_2014_realtime_big_data_in_d_by_don_clugston/



Re: GCs in the news

2014-07-17 Thread Andrei Alexandrescu via Digitalmars-d

On 7/17/14, 2:57 AM, currysoup wrote:

On Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:

On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via
Digitalmars-d wrote:

It appears still to be a general meme that performance required no GC
and GC mean poor performance. The debate has been restarted on the Go
mailing list under the banner "go without garbage collector". The
response to will Go remove the garbage collector was somewhat
unequivocal: nope.


That's good news in a way. If a big company accepts GC and the Go
crowd go with it (pardon the pun), then it will find more acceptance
(as Paulo pointed out in a different thread).


It's not about "acceptance", it's about the reality that a GC is not a
universal solution to memory management.

Just from watching a few of the DConf 2014 talks, if you want
performance you avoid the GC at all costs (even if that means allocating
into huge predefined buffers).


Not at all costs! warp creates a little litter during e.g. command line 
preprocessing and other inconsequential tasks. The core of it is careful 
to not allocate frequently in inner loops.



Once you're going to these lengths to
avoid garbage collection it begs the question, why are you even using
this language? Within this community the question is rhetorical but to
outsiders I feel it's a major concern.


I agree there's a perception issue.


Andrei



Re: GCs in the news

2014-07-17 Thread Chris via Digitalmars-d

On Thursday, 17 July 2014 at 15:19:59 UTC, bachmeier wrote:

On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:

On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:
It's not about "acceptance", it's about the reality that a GC 
is not a universal solution to memory management.


Just from watching a few of the DConf 2014 talks, if you want 
performance you avoid the GC at all costs (even if that means 
allocating into huge predefined buffers). Once you're going 
to these lengths to avoid garbage collection it begs the 
question, why are you even using this language? Within this 
community the question is rhetorical but to outsiders I feel 
it's a major concern.



If D came without GC, it would have replaced C++ a long time 
ago!


The only thing that would have been replaced is the complaints 
that D has a garbage collector with complaints that D doesn't 
have the tools and existing libraries of C++. If C++ users were 
sincere in their claims that they really want to use D, they'd 
have disabled the garbage collector and used it.


I think the GC issue is eating resources that would be better 
spent elsewhere.


+1


Re: GCs in the news

2014-07-17 Thread bachmeier via Digitalmars-d

On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:

On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:
It's not about "acceptance", it's about the reality that a GC 
is not a universal solution to memory management.


Just from watching a few of the DConf 2014 talks, if you want 
performance you avoid the GC at all costs (even if that means 
allocating into huge predefined buffers). Once you're going to 
these lengths to avoid garbage collection it begs the 
question, why are you even using this language? Within this 
community the question is rhetorical but to outsiders I feel 
it's a major concern.



If D came without GC, it would have replaced C++ a long time 
ago!


The only thing that would have been replaced is the complaints 
that D has a garbage collector with complaints that D doesn't 
have the tools and existing libraries of C++. If C++ users were 
sincere in their claims that they really want to use D, they'd 
have disabled the garbage collector and used it.


I think the GC issue is eating resources that would be better 
spent elsewhere.


Re: GCs in the news

2014-07-17 Thread Chris via Digitalmars-d

On Thursday, 17 July 2014 at 14:05:02 UTC, Brian Rogoff wrote:

On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:
If D came without GC, it would have replaced C++ a long time 
ago!


That's overly optimistic I think, but I believe that the 
adoption rate would have been far greater for a D without GC, 
or perhaps with a more GC friendly design, as the GC comes up 
first or close in every D discussion with prospective adopters.


However, it's way too late to change that now. IMO, the way 
forward involves removing all or most hidden allocations from 
the D libraries, making programming sans GC easier (@nogc 
everywhere, a compiler switch, documentation for how to work 
around the lack of GC, etc.) and a much better, precise GC as 
part of the D release. Any spec changes necessary to support 
precision should be in a fast path.


Yeah. Best avoid GC in the first place. If GC can stop the world 
for ~250ms, wouldn't it be possible (just an innocent thought) to 
tell the GC to work only if it can guarantee to stay below a 
certain threshold, and to do the rest later (or in a parallel 
thread)?


Re: GCs in the news

2014-07-17 Thread Araq via Digitalmars-d
I feel it is a major concern, if I'm starting a project with 
low latency requirements* I certainly think twice about using 
D. I think this could apply especially to people outside the 
community who might not have experienced the benefits D 
provides. The issue is not that there is a GC, it's that the GC is 
viewed as bad. If the GC was as good as Azul's C4 GC then D 
would be perfect. I'm not sure if D's memory model supports 
such a collector though.


It doesn't.


Re: GCs in the news

2014-07-17 Thread Brian Rogoff via Digitalmars-d

On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:
If D came without GC, it would have replaced C++ a long time 
ago!


That's overly optimistic I think, but I believe that the adoption 
rate would have been far greater for a D without GC, or perhaps 
with a more GC friendly design, as the GC comes up first or close 
in every D discussion with prospective adopters.


However, it's way too late to change that now. IMO, the way 
forward involves removing all or most hidden allocations from the 
D libraries, making programming sans GC easier (@nogc everywhere, 
a compiler switch, documentation for how to work around the lack 
of GC, etc.) and a much better, precise GC as part of the D 
release. Any spec changes necessary to support precision should 
be in a fast path.







Re: GCs in the news

2014-07-17 Thread eles via Digitalmars-d

On Thursday, 17 July 2014 at 13:30:15 UTC, currysoup wrote:

On Thursday, 17 July 2014 at 11:15:10 UTC, Chris wrote:


*According to Don Clugston's talk the default GC can pause for 
~250ms which is totally insane for any kind of interactive or 
near-real-time system. If their concurrent version of the GC 
could reduce this to 10ms it shows the GC implementation is 
fairly naive.


The sequencer that I use executes a loop every 10 ms.


Re: GCs in the news

2014-07-17 Thread currysoup via Digitalmars-d

On Thursday, 17 July 2014 at 11:15:10 UTC, Chris wrote:


Don't know if it's really a "major concern" or the favorite 
weak spot that C++ et. al guys like to flog to death in order 
to distract from the many strengths that D has (in comparison 
with C++ et al.) The answer is always "D has GC, it's the 
Devil, don't touch it!" Also, let's put a little faith in the 
brilliant developers behind D, I'm sure there's a huge 
performance boost for D around the corner.


I'm not here to hate on D, the reason I read these forums is 
because I love the language.


I feel it is a major concern, if I'm starting a project with low 
latency requirements* I certainly think twice about using D. I 
think this could apply especially to people outside the community 
who might not have experienced the benefits D provides. The issue 
is not that there is a GC, it's that the GC is viewed as bad. If the 
GC was as good as Azul's C4 GC then D would be perfect. I'm not 
sure if D's memory model supports such a collector though.


*According to Don Clugston's talk the default GC can pause for 
~250ms which is totally insane for any kind of interactive or 
near-real-time system. If their concurrent version of the GC 
could reduce this to 10ms it shows the GC implementation is 
fairly naive.


Re: GCs in the news

2014-07-17 Thread John via Digitalmars-d

On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:
It's not about "acceptance", it's about the reality that a GC 
is not a universal solution to memory management.


Just from watching a few of the DConf 2014 talks, if you want 
performance you avoid the GC at all costs (even if that means 
allocating into huge predefined buffers). Once you're going to 
these lengths to avoid garbage collection it begs the question, 
why are you even using this language? Within this community the 
question is rhetorical but to outsiders I feel it's a major 
concern.



If D came without GC, it would have replaced C++ a long time ago!


Re: GCs in the news

2014-07-17 Thread Remo via Digitalmars-d
On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via 
Digitalmars-d wrote:
It appears still to be a general meme that performance required no GC
and GC mean poor performance. The debate has been restarted on the Go
mailing list under the banner "go without garbage collector". The
response to will Go remove the garbage collector was somewhat
unequivocal: nope.


GC or no GC - is that the right question?

The quality of GC implementation is probably more important.

"Simpler and faster GC for Go"
https://docs.google.com/document/d/1v4Oqa0WwHunqlb8C3ObL_uNQw3DfSY-ztoA-4wWbKcg/pub

Another point that will be ignored in such debates is that GC 
gives solution for only one problem, memory management.

How about other resources, how to manage them ?


Re: GCs in the news

2014-07-17 Thread w0rp via Digitalmars-d

The key to making D's GC acceptable lies in two factors I believe.

1. Improve the implementation enough so that you will only be 
impacted by GC in extremely low memory or real time environments.
2. Defer allocation more and more by using ranges and algorithms 
more, and trust that compiler optimisations will make these fast.


The big, big offender I believe for extra allocations is 
functions which return objects, rather than functions which write 
to output ranges. The single most common occurrence of this is 
toString. Instead of writing this...


string toString() {
// Allocations the user of the library has no control over.
return foo.toString() ~ bar.toString() ~ " something else";
}

I believe you should always, always instead write this.

// I left out the part with different character types.
void writeString(OutputRange)(OutputRange outputRange)
if (isOutputRange!(OutputRange, char)) {
// Allocations controlled by the user of the library,
// this template could appear in a @nogc function.
foo.writeString(outputRange);
bar.writeString(outputRange);

"something else".copy(outputRange);
}
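
(A hedged illustration of the payoff, not part of the original post:
someObject below stands in for anything implementing the writeString
pattern above, just as foo and bar are assumed there. The caller picks
the sink, so ordinary code can use a GC-backed Appender while @nogc
code can render into a fixed buffer.)

import std.array : appender;
import std.stdio : writeln;

void ordinaryCaller(T)(T someObject)
{
    auto sink = appender!string();   // GC-backed, convenient
    someObject.writeString(sink);
    writeln(sink.data);
}

@nogc void noGcCaller(T)(T someObject)
{
    // A tiny output range over caller-owned storage: no GC allocation.
    static struct FixedSink
    {
        char*   buf;
        size_t  cap;
        size_t* len;   // shared, so by-value copies of the sink still count
        void put(char c) { if (*len < cap) buf[(*len)++] = c; }
    }

    char[256] storage;
    size_t used;
    someObject.writeString(FixedSink(storage.ptr, storage.length, &used));
    // storage[0 .. used] now holds the rendered text.
}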

It's perhaps strange at first because you're pre-programmed from 
other languages, except maybe C++ which uses output streams, to 
always be allocating temporary objects everywhere, even if all 
you are doing is writing them to an object.


For improving the GC to an acceptable level, I believe collection 
only needs to execute fast enough such that it will fit within a 
frame comfortably. So for something rendering at 60FPS you have 1 
second / 60 frames ~= 16.6 milliseconds of computation you can do 
without resulting in a single dropped frame. That means you need 
to get collection down to something in the 1ms to 2ms region. At 
which point collection time will only impact something which is 
really pushing the hardware, which would exclude most mobile 
video games, which are about the complexity of Angry Birds.


I firmly believe there's no silver bullet for automatic memory 
management. Reference counting solutions, including automatic 
reference counting, will consume less memory than a garbage 
collector and offer more predictable collection times, but do so 
at the expense of memory safety and simplicity. You need fatter 
pointers to manage the reference counts, and you need to 
carefully deal with reference cycles.


In addition, you cannot easily share slices of memory with 
reference counting, which is an advantage of garbage collection. 
With GC, you can allocate a string, slice a part of it, hand over 
the slice to some other object, and you know that the slice will 
stay around for as long as it's needed. With reference counting, 
you have to either retain the slice and the whole segment in the 
same way and allow for the possibility of hidden cycles, or 
disallow slicing and create copies instead. Slicing with a GC is 
important, because you can create much more efficient programs 
which take slices based on regex matches, which we do right now.
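
(A tiny sketch of that point, my own illustration; std.regex is used
because regex slicing is exactly the case mentioned above.)

import std.conv : text;
import std.regex : matchFirst, regex;

void sliceExample()
{
    // Imagine this buffer came from a socket read: one GC allocation.
    string line = text("GET ", "/index.html", " HTTP/1.1");

    // The captures below are just slices into that buffer: nothing is
    // copied, and the GC keeps the buffer alive as long as any slice
    // is still referenced somewhere.
    auto m = line.matchFirst(regex(`^(\w+) (\S+)`));
    string method = m[1];   // "GET"
    string path   = m[2];   // "/index.html"

    assert(method == "GET" && path == "/index.html");
}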


For the environments which cannot tolerate collection whatsoever, 
like Sociomantic's real time bidding operations, then control of 
allocation will have to be left to the user. This is where the 
zero allocation idea behind ranges and algorithms comes into 
play, because then the code which doesn't allocate, which could 
potentially be all of std.algorithm, can still be used in those 
environments, rather than being rendered unusable.


There's my thoughts on it anyway. I probably rambled on too long.


Re: GCs in the news

2014-07-17 Thread Paulo Pinto via Digitalmars-d

On Thursday, 17 July 2014 at 11:15:10 UTC, Chris wrote:

On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:

On Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:
On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via 
Digitalmars-d wrote:
It appears still to be a general meme that performance required no GC
and GC mean poor performance. The debate has been restarted on the Go
mailing list under the banner "go without garbage collector". The
response to will Go remove the garbage collector was somewhat
unequivocal: nope.


That's good news in a way. If a big company accepts GC and 
the Go crowd go with it (pardon the pun), then it will find 
more acceptance (as Paulo pointed out in a different thread).


It's not about "acceptance", it's about the reality that a GC 
is not a universal solution to memory management.


Point taken. But as has been said before 90-95% of all apps can 
live happily with GC, and if you want, you can still go bare 
metal with D. The security GC offers should not be 
underestimated either. With "acceptance" I meant that people 
see "it cannot be that bad after all for *most* applications". 
The GC issue is often cited as a D-eal breaker. I understand 
that there are applications that need total control over the 
memory. But those apps have always been programmed in C or any 
other close-to-the-machine language, and even then programmers 
(in gaming for example) have to use additional tricks and hacks 
to squeeze out every little bit of performance. What D has to 
do is to facilitate control over the memory, but I still 
consider it a systems programming language due to the fact that 
it has many things to offer as regards the direct interaction 
with the machine that Java and C# don't. Can you write a device 
driver in Java? If yes, tell me how, I'm interested.


Easy, like in any language that offers FFI.

Expose a Driver class with native method declarations, whose 
implementation is written in Assembly.


The Squawk VM used to drive SunSPOT devices had the device drivers 
written in Java.


There are quite a few other examples in the embedded market, like 
the MicroEJ platform.


That is no different from writing drivers in ANSI C, which 
provides zero features for hardware interaction.


--
Paulo


Re: GCs in the news

2014-07-17 Thread Chris via Digitalmars-d

On Thursday, 17 July 2014 at 11:15:10 UTC, Chris wrote:

On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:

On Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:
On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via 
Digitalmars-d wrote:
It appears still to be a general meme that performance required no GC
and GC mean poor performance. The debate has been restarted on the Go
mailing list under the banner "go without garbage collector". The
response to will Go remove the garbage collector was somewhat
unequivocal: nope.


That's good news in a way. If a big company accepts GC and 
the Go crowd go with it (pardon the pun), then it will find 
more acceptance (as Paulo pointed out in a different thread).


It's not about "acceptance", it's about the reality that a GC 
is not a universal solution to memory management.


Point taken. But as has been said before 90-95% of all apps can 
live happily with GC, and if you want, you can still go bare 
metal with D. The security GC offers should not be 
underestimated either. With "acceptance" I meant that people 
see "it cannot be that bad after all for *most* applications". 
The GC issue is often cited as a D-eal breaker. I understand 
that there are applications that need total control over the 
memory. But those apps have always been programmed in C or any 
other close-to-the-machine language, and even then programmers 
(in gaming for example) have to use additional tricks and hacks 
to squeeze out every little bit of performance. What D has to 
do is to facilitate control over the memory, but I still 
consider it a systems programming language due to the fact that 
it has many things to offer as regards the direct interaction 
with the machine that Java and C# don't. Can you write a device 
driver in Java? If yes, tell me how, I'm interested.


Just from watching a few of the DConf 2014 talks, if you want 
performance you avoid the GC at all costs (even if that means 
allocating into huge predefined buffers). Once you're going to 
these lengths to avoid garbage collection it begs the 
question, why are you even using this language? Within this 
community the question is rhetorical but to outsiders I feel 
it's a major concern.


Don't know if it's really a "major concern" or the favorite 
weak spot that C++ et. al guys like to flog to death in order 
to distract from the many strengths that D has (in comparison 
with C++ et al.) The answer is always "D has GC, it's the 
Devil, don't touch it!" Also, let's put a little faith in the 
brilliant developers behind D, I'm sure there's a huge 
performance boost for D around the corner.


Ah, and there's inline asm too!

