On Mon, Feb 17, 2014 at 7:39 PM, john skaller
<[email protected]> wrote:
>> autoconf is great,
>
> I beg to differ: IMHO it's a heap of rubbish.
> The whole concept is extremely bad. Its "suck it and see" concept
> hides issues and can't work in a cross compilation context, or even
> with multiple compilers (as I have).
>
> Autoconf is also very annoying when it fails because to build
> or upgrade a package you end up having to build or upgrade
> autoconf too.
>
> The way forward is standardisation.
Indeed, standardization is better, but something important to realize is
that autoconf is designed explicitly for the cases where standardization
_falls down_. If everything were fully POSIX compliant and published how it
worked via pkg-config or somesuch, there would be no need for a
configuration manager; autoconf exists because that is not the case. So
saying standardization is the answer is fully correct, it's just not the
situation we have to deal with. It does mean my configuration scripts get
smaller and simpler, and can sometimes even be elided, now that I can rely
on a better ecosystem.
But when I have to configure and build something written 10 years ago,
stuffed on a CD-ROM and never touched since, or something written now on a
15-year-old system, there isn't much other choice. And these cases come
up; autoconf fills in the corner cases the original author never thought
about and certainly doesn't want to think about.
Don't get me wrong, there are _many_ things that can be improved about
autoconf; the macro-based language and the lack of insulation between the
input language (macro sh) and the output (sh) limit its portability in
many ways.
>> those things you don't appreciate how tricky it is to do portably until you
>> come across a non-autoconfed package you need to install.
>
> You don't appreciate how utterly bad autoconf is until you have
> to modify a build to make it work and you get presented with
> a totally impenetrable mess. Even the help is a heap of crap,
> listing a whole lot of non-interesting rubbish and NOT telling
> you the things you actually need to know for your specific package.
Everything is easy once you know how to do it, including autoconf/automake.
And you are right that autoconf is a crappy macro-based language, but what
is important is the functionality: it is very good at doing what it does.
Actually, I wouldn't say good, there is lots to be improved there, but
there are no replacements that actually do what it does.
> What you have to understand is that most code doesn't
> require ANY configuration. Something like Judy, for example,
> only needs ONE configuration parameter (32 or 64 bits).
Yes, but does it need the ability to compile for a 32-bit ARM architecture
on a 64-bit Intel host? A standardized 'install' and 'uninstall' mechanism
that takes things like different permissions into account (install.sh
exists for a reason; it is needed on some architectures)? And does it work
on operating systems the author of the library has never even heard of,
let alone can test on? Configuration is only a small part of what
autoconf/automake does.
Heck, right now I am using a very gutted subset of Judy1 arrays on a 16-bit
system, probably not something the original authors ever envisioned.
> I have a compiler with fairly extensive libraries that
> provides platform independent stuff for most services
> including async socket I/O, and some configuration is
> certainly necessary. Surprisingly little though.
>
> Autoconf has one (and only one) legitimate use:
> building low level system tools (binutils) and of course
> your compiler (gcc, clang, etc).
>
> After that it's the *compiler's* job to provide a standard API.
I agree, but again, autoconf is explicitly and exactly for
those cases where standardization cannot be relied on. It is about making
your library work, whatever it takes, on whatever system it is thrown onto.
There is no _elegant_ solution to this problem as it is inherently meant to
work in an _inelegant_ system.
And standardization of the language does not imply the system has a POSIX
compliant 'grep', or that binaries should still go in /usr/local/bin, or
any of that meta-information. autoconf is as much about the system around
the language (like the options passed to your compiler, for instance) as
the language itself. I have rarely needed an autoconf #define inside of a
code file, but it is needed for figuring out the right options to pass to
the C compiler, where to install the program, and so on. Most of the
changes to the code can be accomplished by filling in any standardized
APIs the system doesn't support, like generating a local stdint.h file or
a non-broken snprintf if it doesn't exist on the system.
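To make that concrete, the snprintf case usually ends up looking roughly
like this (a minimal sketch; NEED_SNPRINTF is a hypothetical name for a
configure-set macro, and a serious shim would format incrementally rather
than through a fixed buffer):

    #ifdef NEED_SNPRINTF        /* hypothetical: defined by configure   */
    #include <stdarg.h>         /* when no working snprintf was found   */
    #include <stdio.h>
    #include <string.h>

    int snprintf(char *buf, size_t size, const char *fmt, ...)
    {
        char tmp[4096];         /* assumes the output fits              */
        va_list ap;
        int n;

        va_start(ap, fmt);
        n = vsprintf(tmp, fmt, ap); /* pre-C99 systems lack vsnprintf   */
        va_end(ap);

        if (size > 0) {         /* copy with truncation, NUL-terminate  */
            size_t m = (size_t)n < size - 1 ? (size_t)n : size - 1;
            memcpy(buf, tmp, m);
            buf[m] = '\0';
        }
        return n;               /* C99 semantics: untruncated length    */
    }
    #endif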
It is pretty vacuous and trivially true to say "the solution to autoconf is
to live in a world where autoconf isn't needed", because that is not the
world we all live in. Working towards it is great, and it is a major effort
of mine, but it doesn't mean I can pretend I live in it.
Don't get me wrong, I'd love a replacement for autoconf that _actually
replicates autoconf's functionality_. Every replacement I have seen pretty
much just replicates the subset of the functionality its author has
encountered.
>> I always assumed the odd API was to conform to some internal HP coding
>> standard created by someone who had a history with Algol (hence the in/out
>> arguments). I've seen odder ones forced on programmers out there.
>
> The C API is not odd, it's perfect. Every argument is more or less
> exactly as it should be (give or take a casting, which is the inevitable
> result of using crap languages like C).
Yes, which is why I get rid of the need for casting, use standardized
types in my interface, and make it almost impossible to use the arguments
in the incorrect position. As in, I fix the issue you raise, and others
that have bugged me too. I can't just shrug and say C is crap when I find
something broken; I fix it if I can, to make my life easier in the future.
> The MACRO API is crap. It hides where you need to use
> an lvalue and where an rvalue will do. That's a bad idea in C
> (and an even worse one in C++).
Yup. I stopped using it after it fell down on simple idioms like for loops.
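To illustrate the difference (Judy1Set, J1S and PJE0 are the real Judy
names; the fragment just contrasts the two call styles):

    #include <Judy.h>

    void demo(void)
    {
        Pvoid_t array = (Pvoid_t) NULL;
        int     rc;

        Judy1Set(&array, 42, PJE0); /* the explicit & makes the        */
                                    /* lvalue requirement visible      */
        J1S(rc, array, 42);         /* the macro takes &array for you, */
                                    /* hiding that requirement         */
    }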
>> another is by using actual structures the compiler can do alias analysis.
>> for instance
>>
>> void foo(judyl_t *j, int *x) {
>>   if (jEmpty(*j))
>>     blah;
>>   *x = 0;
>>   if (jEmpty(*j))
>>     bar;
>> }
>>
>> Since judyl_t and int are different types, the compiler can assume that x
>> does not point to the same memory location as j, meaning it can combine both
>> jEmpty calls into one caching the result and removing a load. It would not
>> be able to do this with Pvoid_t values without explicitly checking if x == j.
>
> Unfortunately, alias tests in compilers are really dangerous.
> Because the type system is so weak, people cast left, right
> and centre. In my compiler I do it systematically and I have
> to pass -fno-strict-aliasing. I wish I didn't have to do this but
> unfortunately C just doesn't have a good enough type system
> or semantics to avoid it sometimes ;(
Yeah, I had issues with that in jhc's back end for a while too. I was able
to eventually fix it by using variants of the struct method like the one I
use here, but there were pitfalls along the way.
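The shape of the fix is roughly this (a sketch of the struct method; the
field name and the jEmpty body are my own, standing in for the real ones):

    typedef struct judyl { void *root; } judyl_t;

    static int jEmpty(judyl_t j) { return j.root == NULL; }

    void foo(judyl_t *j, int *x)
    {
        if (jEmpty(*j))   /* loads j->root                             */
            /* blah */;
        *x = 0;           /* an int store cannot alias a judyl_t under */
                          /* strict aliasing, so ...                   */
        if (jEmpty(*j))   /* ... the optimizer may reuse the first     */
            /* bar */;    /* load instead of reloading                 */
    }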
John