pure D jpeg encoder, public domain

2016-06-17 Thread ketmar via Digitalmars-d-announce
as a complement to the jpeg decoder, here[1] is a jpeg encoder 
from the same author. surprisingly, it is more than two times 
smaller; i took a look at it and decided: why, we should have 
both!


the author claims that it supports baseline grayscale and RGB 
jpegs. i tested it on some images i have, but no heavy testing 
was done. it *should* work, though, as it is a straightforward 
port.


it is completely independent of the decoder, and self-contained.

unlicense, the same as the decoder. yep, i know, i know, i'm very 
sorry. fork it and relicense!



[1] http://repo.or.cz/iv.d.git/blob_plain/HEAD:/jpege.d


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread ketmar via Digitalmars-d-announce

On Friday, 17 June 2016 at 23:17:56 UTC, Xinok wrote:

On Friday, 17 June 2016 at 22:15:47 UTC, ketmar wrote:
i put it under unlicense[1], as some other works of the same 
author are using it, and it is basically the same as PD.


[1] http://unlicense.org/


Unfortunately, using unlicense is just as problematic as using 
public domain:


https://programmers.stackexchange.com/questions/147111/what-is-wrong-with-the-unlicense


alas, that is all i can do without breaking the "spirit" of the 
original terms. i'm ok with it, and people can still fork the 
code and relicense it under Boost/MIT.


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread Xinok via Digitalmars-d-announce

On Friday, 17 June 2016 at 22:15:47 UTC, ketmar wrote:
i put it under unlicense[1], as some other works of the same 
author are using it, and it is basically the same as PD.


[1] http://unlicense.org/


Unfortunately, using unlicense is just as problematic as using 
public domain:


https://programmers.stackexchange.com/questions/147111/what-is-wrong-with-the-unlicense

The next best thing is the CC0 license (Creative Commons Zero), 
which is better written than unlicense, but it's currently not 
recommended for software / source code.


http://copyfree.org/content/standard/licenses/cc0/license.txt

After that, the most-open licenses with good legal standing would 
be Boost and MIT but then you run into the same issues again with 
incompatible licenses.


I don't have any recommendations but I thought it was worth 
pointing out that unlicense isn't the solution here.


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread ketmar via Digitalmars-d-announce

On Friday, 17 June 2016 at 13:35:58 UTC, John Colvin wrote:

On Friday, 17 June 2016 at 13:05:47 UTC, ketmar wrote:
finally, the thing you all waited for years is here! pure D 
no-frills JPEG decoder with progressive JPEG support! Public 
Domain! one file! no Phobos or other external dependencies! it 
even has some DDoc! grab it[1] now while it's hot!


[1] http://repo.or.cz/iv.d.git/blob_plain/HEAD:/jpegd.d


awesome.

Without wanting to start a huge thing about this, see 
http://linuxmafia.com/faq/Licensing_and_Law/public-domain.html 
and http://www.rosenlaw.com/lj16.htm and please at least add an 
optional licencing under a traditional permissive open-source 
license (boost would be nice, who knows, maybe phobos should 
have jpeg support?).


i put it under unlicense[1], as some other works of the same 
author are using it, and it is basically the same as PD.


[1] http://unlicense.org/


Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread jmh530 via Digitalmars-d-announce

On Monday, 30 May 2016 at 19:16:50 UTC, Jason White wrote:


Note that this is still a ways off from being production-ready. 
It needs some polishing. Feedback would be most appreciated 
(file some issues!). I really want to make this one of the best 
build systems out there.




I found the beginning of the tutorial very clear. I really liked 
that it can produce a png of the build graph. I also liked the 
Lua build description for DMD. Much more legible than the 
makefile.


However, once I got to the "Going Meta: Building the Build 
Description" section of the tutorial, I got a little confused.


I found it a little weird that the json output towards the end of 
the tutorial doesn't always match up. For example, where did the 
.h files go from the inputs? (I get that they aren't needed for 
running gcc, but you should mention that.) Why is it displaying 
cc instead of gcc? I just feel like you might be able to split 
things up a little and provide a few more details: say, show a 
base version first, then show how to customize what is displayed. 
Also, it's a little terse on the details of things like what 
cc.binary is doing. Always err on the side of explaining things 
too much rather than too little, IMO.


Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jun 17, 2016 at 07:30:42PM +, Fool via Digitalmars-d-announce wrote:
> On Friday, 17 June 2016 at 08:23:50 UTC, Atila Neves wrote:
> > I agree, but CMake/ninja, tup, regga/ninja, reggae/binary are all
> > correct _and_ fast.
> 
> 'Correct' referring to which standards? There is an interesting series
> of blog posts by Mike Shal:
> 
> http://gittup.org/blog/2014/03/6-clobber-builds-part-1---missing-dependencies/
> http://gittup.org/blog/2014/05/7-clobber-builds-part-2---fixing-missing-dependencies/
> http://gittup.org/blog/2014/06/8-clobber-builds-part-3---other-clobber-causes/
> http://gittup.org/blog/2015/03/13-clobber-builds-part-4---fixing-other-clobber-causes/

To me, "correct" means:

- After invoking the build tool, the workspace *always* reflects a
  valid, reproducible build. Regardless of initial conditions, existence
  or non-existence of intermediate files, stale files, temporary files,
  or other detritus. Independent of environmental factors. Regardless of
  whether a previous build invocation was interrupted in the middle --
  the build system should be able to continue where it left off,
  reproduce any partial build products, and produce exactly the same
  products, bit for bit, as if it had not been interrupted before.

- If anything changes -- and I mean literally ANYTHING -- that might
  cause the build products to be different in some way, the build tool
  should detect that and update the affected targets accordingly the
  next time it's invoked.  "Anything" includes (but is not limited to):

   - The contents of source files, even if the timestamp stays
 identical to the previous version.

   - Change in compiler flags, or any change to the build script itself;

   - A new version of the compiler was installed on the system;

   - A system library was upgraded / a new library was installed that
 may get picked up at link time;

   - Change in environment variables that might cause some of the build
 commands to work differently (yes I know this is a bad thing -- it
 is not recommended to have your build depend on this, but the point
 is that if it does, the build tool ought to detect it).

   - Editing comments in a source file (what if there's a script that
 parses comments? Or ddoc?);

   - Reverting a patch (that may leave stray source files introduced by
 the patch).

   - Interrupting a build in the middle -- the build system should be
 able to detect any partially-built products and correctly rebuild
 them instead of picking up a potentially corrupted object in the
 next operation in the pipeline.

- As much as is practical, all unnecessary work should be elided. For
  example:

   - If I edit a comment in a source file, and there's an intermediate
 compile stage where an object file is produced, and the object file
 after the change is identical to the one produced by the previous
 compilation, then any further actions -- linking, archiving, etc.
 -- should not be done, because all products will be identical.

   - More generally, if my build consists of source file A, which gets
 compiled to intermediate product B, which in turn is used to
 produce final product C, then if A is modified, the build system
 should regenerate B. But if the new B is identical to the old B,
 then C should *not* be regenerated again.

  - Contrariwise, if modifications are made to B, the build system
should NOT use the modified B to generate C; instead, it should
detect that B is out-of-date w.r.t. A, and regenerate B from A
first, and then proceed to generate C if it would be different
from before.

   - Touching the timestamp of a source file or intermediate file should
 *not* cause the build system to rebuild that target, if the result
 will actually be bit-for-bit identical with the old product.

   - In spite of this work elision, the build system should still ensure
 that the final build products are 100% reproducible. That is, work
 is elided if and only if it is actually unnecessary; if a comment
 change actually causes something to change (e.g., ddocs are
 different now), then the build system must rebuild all affected
 subsequent targets.
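That early-cutoff rule (regenerate B from A, but leave C alone when the
new B is bit-for-bit identical) can be sketched in a few lines. Python is
used here purely for illustration, and all names are hypothetical, not
from any actual build tool:

```python
import hashlib


def digest(data: bytes) -> str:
    """Bit-for-bit identity check via a content hash."""
    return hashlib.sha256(data).hexdigest()


def rebuild(a_source: bytes, old_b_digest: str, compile_step, link_step):
    """A -> B -> C pipeline with early cutoff: B is always regenerated
    from A, but C is rebuilt only if B actually changed."""
    new_b = compile_step(a_source)
    new_b_digest = digest(new_b)
    if new_b_digest == old_b_digest:
        return None, new_b_digest          # C elided: nothing downstream changed
    return link_step(new_b), new_b_digest  # B differs, so C must be rebuilt
```

For instance, a comment-only edit to A that the compiler strips out yields
an identical B, so link_step is never invoked.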

- Assuming that a revision control system is in place, and a workspace
  is checked out on revision X with no further modifications, then
  invoking the build tool should ALWAYS, without any exceptions, produce
  exactly the same outputs, bit for bit.  I.e., if your workspace
  faithfully represents revision X in the RCS, then invoking the build
  tool will produce the exact same binary products as anybody else who
  checks out revision X, regardless of their initial starting
  conditions.

   - E.g., I may be on revision Y, then I run svn update -rX, and there
 may be stray intermediate files strewn around my workspace that are
 not in a fresh checkout of revision X, the build tool should still
 

Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread Fool via Digitalmars-d-announce

On Friday, 17 June 2016 at 08:23:50 UTC, Atila Neves wrote:
I agree, but CMake/ninja, tup, regga/ninja, reggae/binary are 
all correct _and_ fast.


'Correct' referring to which standards? There is an interesting 
series of blog posts by Mike Shal:


http://gittup.org/blog/2014/03/6-clobber-builds-part-1---missing-dependencies/
http://gittup.org/blog/2014/05/7-clobber-builds-part-2---fixing-missing-dependencies/
http://gittup.org/blog/2014/06/8-clobber-builds-part-3---other-clobber-causes/
http://gittup.org/blog/2015/03/13-clobber-builds-part-4---fixing-other-clobber-causes/


Re: dlang-requests 0.1.7 released

2016-06-17 Thread ikod via Digitalmars-d-announce
On Tuesday, 14 June 2016 at 14:59:37 UTC, Andrei Alexandrescu 
wrote:

On 6/11/16 7:03 PM, ikod wrote:

Hello,

Dlang-requests is a library created under the influence of 
Python-requests, with the primary goals of ease of use and 
performance.


...


Thanks! Does the project have a dub presence? How does it 
compare feature-wise and speed-wise with curl? -- Andrei


Hello,

Finally, I made some improvements and ran minimal performance 
tests against command-line curl. I wrote simple code for file 
download using dlang-requests, ran it and curl against the same 
urls (httpbin server on my notebook), and compared "total", 
"system", and "user" time for different cases. You can find the 
numbers and code below.
So my conclusion is: performance is comparable for these cases, 
but there is some room for improvement in dlang-requests.


Case 1 - 50Mb of random data, no encoding
Case 2 - 50Mb of random data, transfer chunked
Case 3 - 50Mb of random data, transfer chunked, content gzip

                  measured times, sec
 ------------------------------------------------
      |    user     |   system    |    total
 Case |-------------|-------------|-------------
      |  d-r | curl |  d-r | curl |  d-r | curl
 -----|------|------|------|------|------|------
   1  | 0.17 | 0.14 | 0.20 | 0.32 | 51.7 | 52.2
   2  | 0.19 | 0.11 | 0.15 | 0.21 | 51.8 | 51.9
   3  | 0.21 | 0.15 | 0.11 | 0.15 | 51.5 | 52.1


import std.stdio;
import requests;

pragma(lib, "ssl");
pragma(lib, "crypto");

void main()
{
auto sink = File("/dev/null", "wb");
auto rq = Request();
rq.useStreaming = true;
    auto rs = rq.get("http://127.0.0.1:8080/stream-bytes/5120");

auto stream = rs.receiveAsRange();
while(!stream.empty) {
sink.rawWrite(stream.front);
stream.popFront;
}
}



Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread Dicebot via Digitalmars-d-announce
On 06/17/2016 06:20 PM, H. S. Teoh via Digitalmars-d-announce wrote:
>> If you happen to be unlucky enough to work on a project so large you
>> need to watch the file system, then use the tup backend I guess.
> [...]
> 
> Yes, I'm pretty sure that describes a lot of software projects out there
> today. The scale of software these days is growing exponentially, and
> there's no sign of it slowing down.  Or maybe that's just an artifact of
> the field I work in? :-P

Server-side domain is definitely getting smaller because micro-service
hype keeps growing (and that is one of the hypes I do actually support btw).


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread ag0aep6g via Digitalmars-d-announce

On 06/17/2016 04:08 PM, Kagamin wrote:

Uh oh, a license is revokable? What happens when boost license is revoked?


No, it's not, but you can publish stuff under multiple licenses at the 
same time.


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread Andrei Alexandrescu via Digitalmars-d-announce

On 06/17/2016 09:05 AM, ketmar wrote:

finally, the thing you all waited for years is here! pure D no-frills
JPEG decoder with progressive JPEG support! Public Domain! one file! no
Phobos or other external dependencies! it even has some DDoc! grab it[1]
now while it's hot!

[1] http://repo.or.cz/iv.d.git/blob_plain/HEAD:/jpegd.d


https://www.reddit.com/r/programming/comments/4oj7ja/public_domain_jpeg_decoder_with_progressive/

Andrei


Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jun 17, 2016 at 09:00:45AM +, Atila Neves via 
Digitalmars-d-announce wrote:
> On Friday, 17 June 2016 at 06:18:28 UTC, H. S. Teoh wrote:
> > On Fri, Jun 17, 2016 at 05:41:30AM +, Jason White via
> > Digitalmars-d-announce wrote: [...]
> > > Where Make gets slow is when checking for changes on a ton of
> > > files.  I haven't tested it, but I'm sure Button is faster than
> > > Make in this case because it checks for changed files using
> > > multiple threads.  Using the file system watcher can also bring
> > > this down to a near-zero time.
> > 
> > IMO using the file system watcher is the way to go. It's the only
> > way to beat the O(n) pause at the beginning of a build as the build
> > system scans for what has changed.
> 
> See, I used to think that, then I measured. tup uses fuse for this and
> that's exactly why it's fast. I was considering a similar approach
> with the reggae binary backend, and so I went and timed make, tup,
> ninja and itself on a synthetic project. Basically I wrote a program
> to write out source files to be compiled, with a runtime parameter
> indicating how many source files to write.
> 
> The most extensive tests I did were on a synthetic project of 30k
> source files. That's a lot bigger than the vast majority of developers
> are ever likely to work on. As a comparison, the 2.6.11 version of the
> Linux kernel had 17k files.

Today's software projects are much bigger than you seem to imply. For
example, my work project *includes* the entire Linux kernel as part of
its build process, and the size of the workspace is dominated by the
non-Linux components. So 30k source files isn't exactly something
totally far out.


> A no-op build on my laptop was about (from memory):
> 
> tup: <1s
> ninja, binary: 1.3s
> make: >20s
> 
> It turns out that just stat'ing everything is fast enough for pretty
> much everybody, so I just kept the simple algorithm. Bear in mind the
> Makefiles here were the simplest possible - doing anything that
> usually goes on in Makefileland would have made it far, far slower. I
> know: I converted a build system at work from make to hand-written
> ninja and its no-op builds went from nearly 2 minutes to 1s.

Problem: stat() isn't good enough when network file sharing is involved.
It breaks correctness by introducing heisenbugs caused by (sometimes
tiny) differences in local hardware clocks. It can also break if two
versions of the same file share the same timestamp (often thought
impossible, but quite possible with machine-generated files and a
filesystem that lacks subsecond resolution -- and it's rare enough that
when it does happen, people are left scratching their heads for many
wasted hours). To guarantee correctness you need to compute a digest of
file contents, not just the timestamp.
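A minimal sketch of that content-based change detection (Python here for
illustration only; the names are hypothetical and the same idea applies
to any build tool):

```python
import hashlib


def file_digest(path: str) -> str:
    """Hash file contents in chunks. Unlike mtime-based stat() checks,
    this is immune to clock skew between machines and to two versions
    of a file landing on the same timestamp."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def needs_rebuild(path: str, recorded_digest: str) -> bool:
    # Rebuild only when the contents actually differ from the recorded
    # state; a touched-but-unchanged file is correctly left alone.
    return file_digest(path) != recorded_digest
```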


> If you happen to be unlucky enough to work on a project so large you
> need to watch the file system, then use the tup backend I guess.
[...]

Yes, I'm pretty sure that describes a lot of software projects out there
today. The scale of software these days is growing exponentially, and
there's no sign of it slowing down.  Or maybe that's just an artifact of
the field I work in? :-P


T

-- 
Never step over a puddle, always step around it. Chances are that whatever made 
it is still dripping.


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread ketmar via Digitalmars-d-announce

On Friday, 17 June 2016 at 14:33:41 UTC, ketmar wrote:
ah, just fork it and slap Boost license on top! i myself have 
no objections, and i doubt that the original author will object 
too.


p.s. i'm pretty sure that somebody *will* fork it soon to get it 
to code.dlang.org. i won't do that myself, but again, i have no 
objections.


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread ketmar via Digitalmars-d-announce

On Friday, 17 June 2016 at 14:28:52 UTC, Rory McGuire wrote:
Thanks for that info. I don't think it would help if ketmar made 
it MIT / Boost licensed or any other: if the original author's 
relatives chose to dispute the license, the fact that the code is 
based on the PD code would make it hard to defend.


ah, just fork it and slap Boost license on top! i myself have no 
objections, and i doubt that the original author will object too.


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread Rory McGuire via Digitalmars-d-announce
On Fri, Jun 17, 2016 at 3:35 PM, John Colvin via Digitalmars-d-announce <
digitalmars-d-announce@puremagic.com> wrote:

> On Friday, 17 June 2016 at 13:05:47 UTC, ketmar wrote:
>
>> finally, the thing you all waited for years is here! pure D no-frills
>> JPEG decoder with progressive JPEG support! Public Domain! one file! no
>> Phobos or other external dependencies! it even has some DDoc! grab it[1] now
>> while it's hot!
>>
>> [1] http://repo.or.cz/iv.d.git/blob_plain/HEAD:/jpegd.d
>>
>
> awesome.
>
> Without wanting to start a huge thing about this, see
> http://linuxmafia.com/faq/Licensing_and_Law/public-domain.html and
> http://www.rosenlaw.com/lj16.htm and please at least add an optional
> licencing under a traditional permissive open-source license (boost would
> be nice, who knows, maybe phobos should have jpeg support?).
>

Thanks for that info. I don't think it would help if ketmar made it MIT /
Boost licensed or any other: if the original author's relatives chose to
dispute the license, the fact that the code is based on the PD code would
make it hard to defend.

I think that source code under PD might get an exception to the laws in
those articles because of the way PD is used globally, what its intent is,
and what our common understanding of it is. However, that would probably
have to go to court to settle.


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread ketmar via Digitalmars-d-announce
On Friday, 17 June 2016 at 13:51:29 UTC, Andrei Alexandrescu 
wrote:
Nice, thanks for this work. I see it has 3213 lines. I take it 
the source is https://github.com/richgel999/jpeg-compressor. 
How many lines from there are reflected in the D code? -- Andrei


it's a complete port of jpegd.h+jpegd.cpp (so, no encoder). it is 
almost 1:1 to the c++ code, including the fancy templated row/col 
decoders and the 4x4 matrix mini-class. mostly sed work, and after 
i made it compile (and fixed a silly bug in CLAMP that i 
introduced) it "just works". i replaced the stream reader class 
with a delegate (we have such great delegates in D, so let's use 
'em! ;-), but otherwise the code is unmodified.


ah, i also put `.ptr` on array accesses to skip bounds checking -- 
i love to build my code with bounds checking on, and i don't feel 
that i need it in this decoder -- it should be fairly well-tested.


so you may assume that all of the lines there came from c++ 
(sans some curly brackets).


of course, one can do a much better job by writing "idiomatic" D 
code, i guess, but that would be much greater work -- not a 
"port", but a "rewrite".


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread ketmar via Digitalmars-d-announce

On Friday, 17 June 2016 at 13:35:58 UTC, John Colvin wrote:
Without wanting to start a huge thing about this, see 
http://linuxmafia.com/faq/Licensing_and_Law/public-domain.html 
and http://www.rosenlaw.com/lj16.htm and please at least add an 
optional licencing under a traditional permissive open-source 
license (boost would be nice, who knows, maybe phobos should 
have jpeg support?).


ah, i know about the PD caveats. but the original source was PD, 
so i don't feel like adding any other license on top of it would 
be good. not that it is legally impossible, i just want to keep it 
as the original author intended. after all, anybody can just fork 
it and add any license he wants. it is unlikely that the thing 
will get extensive upgrades anyway. ;-)


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread Kagamin via Digitalmars-d-announce

On Friday, 17 June 2016 at 13:35:58 UTC, John Colvin wrote:
Without wanting to start a huge thing about this, see 
http://linuxmafia.com/faq/Licensing_and_Law/public-domain.html 
and http://www.rosenlaw.com/lj16.htm and please at least add an 
optional licencing under a traditional permissive open-source 
license (boost would be nice, who knows, maybe phobos should 
have jpeg support?).


Uh oh, a license is revokable? What happens when boost license is 
revoked?


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread John Colvin via Digitalmars-d-announce

On Friday, 17 June 2016 at 13:05:47 UTC, ketmar wrote:
finally, the thing you all waited for years is here! pure D 
no-frills JPEG decoder with progressive JPEG support! Public 
Domain! one file! no Phobos or other external dependencies! it 
even has some DDoc! grab it[1] now while it's hot!


[1] http://repo.or.cz/iv.d.git/blob_plain/HEAD:/jpegd.d


awesome.

Without wanting to start a huge thing about this, see 
http://linuxmafia.com/faq/Licensing_and_Law/public-domain.html 
and http://www.rosenlaw.com/lj16.htm and please at least add an 
optional licencing under a traditional permissive open-source 
license (boost would be nice, who knows, maybe phobos should have 
jpeg support?).


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread Andrei Alexandrescu via Digitalmars-d-announce

On 06/17/2016 09:05 AM, ketmar wrote:

finally, the thing you all waited for years is here! pure D no-frills
JPEG decoder with progressive JPEG support! Public Domain! one file! no
Phobos or other external dependencies! it even has some DDoc! grab it[1]
now while it's hot!

[1] http://repo.or.cz/iv.d.git/blob_plain/HEAD:/jpegd.d


Nice, thanks for this work. I see it has 3213 lines. I take it the 
source is https://github.com/richgel999/jpeg-compressor. How many lines 
from there are reflected in the D code? -- Andrei


pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread ketmar via Digitalmars-d-announce
finally, the thing you all waited for years is here! pure D 
no-frills JPEG decoder with progressive JPEG support! Public 
Domain! one file! no Phobos or other external dependencies! it 
even has some DDoc! grab it[1] now while it's hot!


[1] http://repo.or.cz/iv.d.git/blob_plain/HEAD:/jpegd.d


Re: Beta release DUB 1.0.0-beta.1

2016-06-17 Thread Sönke Ludwig via Digitalmars-d-announce

Am 17.06.2016 um 13:06 schrieb mark_mcs:

I'm not sure if this is a defect or a conscious decision so I thought
I'd ask the question first. Is there a reason why Dub on Windows uses
the APPDATA environment variable, rather than LOCALAPPDATA? The APPDATA
variable points to the roaming profile directory which means that my
entire Dub cache is uploaded when I log out, then downloaded again when
I log back in. Should I raise a github issue for this? Seems like an
easy fix for a 1.0.0 release.


It currently stores both the configuration and the cached packages 
in the same folder, while it should put the configuration in 
APPDATA and the cached packages in LOCALAPPDATA (so it's indeed a 
defect). It's an easy fix, but too late in the release process 
now. It could go into 1.0.1, though.
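The intended split can be sketched like this (Python for illustration; 
the function name, file names, and non-Windows fallbacks are all 
hypothetical, not DUB's actual layout):

```python
import os


def dub_paths():
    """Config belongs in the roaming profile (APPDATA), which is synced
    at login/logout; the package cache belongs in LOCALAPPDATA, which
    stays on the local machine and is never uploaded."""
    roaming = os.environ.get("APPDATA") or os.path.expanduser("~/.config")
    local = os.environ.get("LOCALAPPDATA") or os.path.expanduser("~/.cache")
    config_file = os.path.join(roaming, "dub", "settings.json")
    package_cache = os.path.join(local, "dub", "packages")
    return config_file, package_cache
```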


Re: Beta release DUB 1.0.0-beta.1

2016-06-17 Thread mark_mcs via Digitalmars-d-announce

On Tuesday, 7 June 2016 at 09:54:19 UTC, Sönke Ludwig wrote:
DUB 1.0.0 is nearing completion. The new feature over 0.9.25 is 
support for single-file packages, which can be used to write 
shebang-style scripts on Posix systems:


#!/usr/bin/env dub
/++ dub.sdl:
name "colortest"
dependency "color" version="~>0.0.3"
+/

void main()
{
import std.stdio : writefln;
import std.experimental.color.conv;
import std.experimental.color.hsx;
import std.experimental.color.rgb;

auto yellow = RGB!("rgb", float)(1.0, 1.0, 0.0);
writefln("Yellow in HSV: %s", 
yellow.convertColor!(HSV!()));

}

With "chmod +x" it can then simply be run as ./colortest.d.

Apart from that, the release contains some bug fixes, most 
notably it doesn't query the registry for each build any more.


Full change log:
https://github.com/D-Programming-Language/dub/blob/master/CHANGELOG.md

Download (Latest Preview):
http://code.dlang.org/download


I'm not sure if this is a defect or a conscious decision so I 
thought I'd ask the question first. Is there a reason why Dub on 
Windows uses the APPDATA environment variable, rather than 
LOCALAPPDATA? The APPDATA variable points to the roaming profile 
directory which means that my entire Dub cache is uploaded when I 
log out, then downloaded again when I log back in. Should I raise 
a github issue for this? Seems like an easy fix for a 1.0.0 
release.


Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread Dicebot via Digitalmars-d-announce
However, I question the utility of even doing this in the first 
place. You miss out on the convenience of using the existing 
command line interface. And for what? Just so everything can be 
in D? Writing the same thing in Lua would be much prettier. I 
don't understand this dependency-phobia.


It comes from knowing that for most small to average size D 
projects you don't need a build _tool_ at all. If a full clean 
build takes 2 seconds, installing an extra tool to achieve the 
same thing a one-line shell script does is highly annoying.


Your reasoning about makefiles seems to be flavored by C++ 
realities. But my typical D makefile would look something like 
this:


build:
    dmd -ofbinary `find ./src`

test:
    dmd -unittest -main `find ./src`

deploy: build test
    scp ./binary server:

That means that I usually care neither about correctness nor 
about speed, only about a good cross-platform way to define 
pipelines. And for that, fetching a dedicated tool is simply too 
discouraging.


In my opinion that is why it is so hard for any new tool to take 
over make's place - they all put too much attention into 
complicated projects, but to get a self-sustaining network effect 
one has to prioritize small and simple projects. And ease of 
availability is most important there.


Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread Kagamin via Digitalmars-d-announce

On Friday, 17 June 2016 at 04:54:37 UTC, Jason White wrote:

Why the build script can't have a command line interface?


It could, but now the build script is more complicated, and for 
little gain.


It's only as complicated as the required features demand, and no 
more. If the command line interface is not needed, it can be 
omitted, for example:

---
import button;
auto Build = ...
mixin mainBuild!Build; //no CLI
---

Adding command line options on top of that to configure the 
build would be painful.


$ rdmd build.d configure [options]

Well, if one wants to go really complex, a prebuilt binary can be 
provided to help with that, but it's not always needed, I think.


It would be simpler and cleaner to write a D program to 
generate the JSON build description for Button to consume. Then 
you can add a command line interface to configure how the build 
description is generated. This is how the Lua build 
descriptions work[1].


---
import button;
auto Build = ...
mixin mainBuildJSON!Build;
---
Should be possible to make it work like the lua script.


Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread Atila Neves via Digitalmars-d-announce

On Friday, 17 June 2016 at 06:18:28 UTC, H. S. Teoh wrote:
On Fri, Jun 17, 2016 at 05:41:30AM +, Jason White via 
Digitalmars-d-announce wrote: [...]
Where Make gets slow is when checking for changes on a ton of 
files. I haven't tested it, but I'm sure Button is faster than 
Make in this case because it checks for changed files using 
multiple threads. Using the file system watcher can also bring 
this down to a near-zero time.


IMO using the file system watcher is the way to go. It's the 
only way to beat the O(n) pause at the beginning of a build as 
the build system scans for what has changed.


See, I used to think that, then I measured. tup uses fuse for 
this and that's exactly why it's fast. I was considering a 
similar approach with the reggae binary backend, and so I went 
and timed make, tup, ninja and itself on a synthetic project. 
Basically I wrote a program to write out source files to be 
compiled, with a runtime parameter indicating how many source 
files to write.


The most extensive tests I did were on a synthetic project of 30k 
source files. That's a lot bigger than the vast majority of 
developers are ever likely to work on. As a comparison, the 
2.6.11 version of the Linux kernel had 17k files.


A no-op build on my laptop was about (from memory):

tup: <1s
ninja, binary: 1.3s
make: >20s

It turns out that just stat'ing everything is fast enough for 
pretty much everybody, so I just kept the simple algorithm. Bear 
in mind the Makefiles here were the simplest possible - doing 
anything that usually goes on in Makefileland would have made it 
far, far slower. I know: I converted a build system at work from 
make to hand-written ninja and its no-op builds went from nearly 2 
minutes to 1s.
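The "just stat everything" approach is easy to sketch (Python for 
illustration; a real build tool would persist the snapshot between runs 
and handle deleted files too, and the function names here are invented):

```python
import os
from concurrent.futures import ThreadPoolExecutor


def snapshot(paths, workers=8):
    """Stat every tracked file in parallel and record its mtime in ns."""
    paths = list(paths)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        times = pool.map(lambda p: os.stat(p).st_mtime_ns, paths)
        return dict(zip(paths, times))


def changed_since(paths, previous):
    """Files whose mtime differs from the previous snapshot (or are new)."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != previous.get(p)]
```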


If you happen to be unlucky enough to work on a project so large 
you need to watch the file system, then use the tup backend I 
guess.


Atila



Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread Atila Neves via Digitalmars-d-announce

On Friday, 17 June 2016 at 05:41:30 UTC, Jason White wrote:

On Thursday, 16 June 2016 at 13:39:20 UTC, Atila Neves wrote:
It would be a worthwhile trade-off, if those were the only two 
options available, but they're not. There are multiple build 
systems out there that do correct builds whilst being faster 
than make. Being faster is easy, because make is incredibly 
slow.


I didn't even find out about ninja because I read about it in 
a blog post, I actively searched for a make alternative 
because I was tired of waiting for it.


Make is certainly not slow for full builds. That is what I was 
testing.


I only care about incremental builds. I actually have difficulty 
understanding why you tested full builds; they're utterly 
uninteresting to me.


A build system can be amazeballs fast, but if you can't rely 
on it doing incremental builds correctly in production, then 
you're probably doing full builds every single time. Being easy 
to use and robust is also pretty important.


I agree, but CMake/ninja, tup, reggae/ninja, reggae/binary are all 
correct _and_ fast.


Atila


Re: Berlin D Meetup June 2016

2016-06-17 Thread Stefan Koch via Digitalmars-d-announce

On Wednesday, 8 June 2016 at 16:31:47 UTC, Ben Palmer wrote:

Hi All,

The June Berlin D Meetup will be happening at 20:00 (note new 
time) on Friday the 17th of June at Berlin Co-Op 
(http://co-up.de/) on the fifth floor.


Danny Arends will be giving a more detailed version of his 
lightning talk he gave at the D conference on his web server, 
"DaNode".


Both alcoholic and non-alcoholic drinks will be available.

More details and an abstract of the talk are available on the 
meetup page here: 
http://www.meetup.com/Berlin-D-Programmers/events/231746496/


Thanks,
Ben.


I just missed my train. I'll try to be there at 20:30


Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread Jason White via Digitalmars-d-announce

On Friday, 17 June 2016 at 06:18:28 UTC, H. S. Teoh wrote:
For me, correctness is far more important than speed. Mostly 
because at my day job, we have a Make-based build system and 
because of Make's weaknesses, countless hours, sometimes even 
days, have been wasted running `make clean; make` just so we 
can "be sure".  Actually, it's worse than that; the "official" 
way to build it is:


svn diff > /tmp/diff
\rm -rf old_checkout
mkdir new_checkout
cd new_checkout
svn co http://svnserver/path/to/project
patch -p0 </tmp/diff

We do it this way because we have been bitten before by `make clean` not *really* 
cleaning *everything*, and so `make clean; make` was actually 
producing a corrupt image, whereas checking out a fresh new 
workspace produces the correct image.


Far too much time has been wasted "debugging" bugs that weren't 
really there, just because Make cannot be trusted to produce 
the correct results. Or heisenbugs that disappear when you 
rebuild from scratch. Unfortunately, due to the size of our 
system, a fresh svn checkout on a busy day means 15-20 mins 
(due to everybody on the local network trying to do fresh 
checkouts!), then make takes about 30-45 mins to build 
everything.  When your changeset touches Makefiles, this could 
mean a 1 hour turnaround for every edit-compile-test cycle, 
which is ridiculously unproductive.


Such unworkable turnaround times, of course, causes people to 
be lazy and just run tests on incremental builds (of unknown 
correctness), which results in people checking in changesets 
that are actually wrong but just happen to work when they were 
testing on an incremental build (thanks to Make picking up 
stray old copies of obsolete libraries or object files or other 
such detritus). Which means *everybody*'s workspace breaks 
after running `svn update`. And of course, nobody is sure 
whether it broke because of their own changes, or because 
somebody checked in a bad changeset; so it's `make clean; make` 
time just to "be sure". That's n times how many man-hours (for 
n = number of people on the team) straight down the drain, 
where had the build system actually been reliable, only the 
person responsible would have to spend a few extra hours to fix 
the problem.


Make proponents don't seem to realize how a seemingly 
not-very-important feature as build correctness actually adds 
up to a huge cost in terms of employee productivity, i.e., 
wasted hours, AKA wasted employee wages for the time spent 
watching `make clean; make` run.


I couldn't agree more! Correctness is by far the most important 
feature of a build system. Second to that is probably being able 
to make sense of what is happening.


I have the same problems as you in my day job, but magnified. 
Some builds take 3+ hours, some nearly 24 hours, and none of the 
developers can run full builds themselves because the build 
process is so long and complicated. Turn-around time to test 
changes is abysmal and everyone is probably orders of magnitude 
more unproductive because of it. All of this because we can't 
trust Make or Visual Studio to do incremental builds correctly.


I hope to change that with Button.


Re: Button: A fast, correct, and elegantly simple build system.

2016-06-17 Thread H. S. Teoh via Digitalmars-d-announce
On Fri, Jun 17, 2016 at 05:41:30AM +, Jason White via 
Digitalmars-d-announce wrote:
[...]
> Where Make gets slow is when checking for changes on a ton of files. I
> haven't tested it, but I'm sure Button is faster than Make in this
> case because it checks for changed files using multiple threads. Using
> the file system watcher can also bring this down to a near-zero time.

IMO using the file system watcher is the way to go. It's the only way to
beat the O(n) pause at the beginning of a build as the build system
scans for what has changed.
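
The win comes from keeping a stat snapshot up to date 
incrementally: only the initial scan is O(n), and after that each 
change event costs O(1). A rough stdlib-only sketch of the 
bookkeeping (a real daemon would receive the `apply_event` calls 
from inotify/FSEvents or a library such as watchdog; the names 
here are mine):

```python
import os
import tempfile

def snapshot(paths):
    """The O(n) scan a conventional build tool repeats at the start
    of every build: stat each file and record (mtime_ns, size)."""
    snap = {}
    for p in paths:
        st = os.stat(p)
        snap[p] = (st.st_mtime_ns, st.st_size)
    return snap

def apply_event(snap, path):
    """What a watcher-driven daemon does instead: on a change event,
    re-stat only the one file the event names (or drop it if it
    was deleted)."""
    try:
        st = os.stat(path)
        snap[path] = (st.st_mtime_ns, st.st_size)
    except FileNotFoundError:
        snap.pop(path, None)

d = tempfile.mkdtemp()
f = os.path.join(d, "main.c")
open(f, "w").close()
snap = snapshot([f])     # paid once, at daemon startup
with open(f, "w") as fh:
    fh.write("int main(void) { return 0; }\n")
apply_event(snap, f)     # O(1) per change event, no full rescan
```

At the next build request the daemon already knows the dirty set 
and can skip the up-front scan entirely.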


> Speed is not the only virtue of a build system. A build system can be
> amazeballs fast, but if you can't rely on it doing incremental builds
> correctly in production, then you're probably doing full builds every
> single time. Being easy to use and robust is also pretty important.
[...]

For me, correctness is far more important than speed. Mostly because at
my day job, we have a Make-based build system and because of Make's
weaknesses, countless hours, sometimes even days, have been wasted
running `make clean; make` just so we can "be sure".  Actually, it's
worse than that; the "official" way to build it is:

svn diff > /tmp/diff
\rm -rf old_checkout
mkdir new_checkout
cd new_checkout
svn co http://svnserver/path/to/project
patch -p0 </tmp/diff