On 06/22/2012 08:50 AM, illusionoflife wrote:
> On Thursday, June 21, 2012 13:39:07 you wrote:
>>> IIRC, that was to allow the URL-extraction portion of wget to be built
>>> stand-alone, so that it would create a tool that just extracts URLs and
>>> spits them out, and not as part of some wget run.
>> Gah. This of course applies to the STANDALONE for html-parse.
>> Presumably, the one in netrc allows it to extract netrc info and then
>> spit it out, or something.
> 
> Well, but we could still remove the duplicated code. Or are we trying to
> spare a few bytes, so that moving all the duplicates into a single
> library is not appropriate?
> 

We certainly could; the implementation could be moved into its own file.
But I suspect all the standalone stuff expects to be built as a single
"gcc -o standalone some-file.c" invocation (which I doubt would even work
right now).
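For anyone who hasn't looked at those files, the pattern in question looks
roughly like this. This is a minimal sketch with made-up names, not the
actual wget sources: the file compiles as ordinary library code by default,
and defining STANDALONE adds a small main() driver so the same file can be
built on its own with something like
"gcc -DSTANDALONE -o extract-urls extract-urls.c":

  #include <stdio.h>

  /* Library code, always compiled as part of the larger program.
     (Here just a stub that echoes its input.)  */
  void
  extract_urls (FILE *fp)
  {
    char buf[1024];
    while (fgets (buf, sizeof buf, fp))
      fputs (buf, stdout);
  }

  #ifdef STANDALONE
  /* Only present in the standalone build: a driver that reads the file
     named on the command line (or stdin) and prints the results.  */
  int
  main (int argc, char **argv)
  {
    FILE *fp = stdin;
    if (argc > 1 && !(fp = fopen (argv[1], "r")))
      {
        perror (argv[1]);
        return 1;
      }
    extract_urls (fp);
    if (fp != stdin)
      fclose (fp);
    return 0;
  }
  #endif /* STANDALONE */

The catch is that the real files pull in project headers and gnulib, so a
bare one-file compile like that is unlikely to work anymore.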

What it really amounts to, I suspect, is bitrot. Nobody is testing this
stuff, nobody is supporting this stuff, and I doubt it even works
properly right now, especially now that we're so closely tied to gnulib.
The maintainer should probably make a decision: either actively support
these standalone tools and include them as part of testing, or else
remove all the #ifdef STANDALONE code in there, so that it's not
misleading anybody one way or the other. :)

-mjc
