Re: Die GNU autotools

2011-01-12 Thread Nadav Har'El
On Wed, Jan 12, 2011, Elazar Leibovich wrote about "Re: Die GNU autotools":
> 3) Have a project in a separate directory which implements an efficient map
> using an existing map implementation. For this project, it makes sense
> to use autotools. But this project will provide a solid testable interface
> (i.e. a .hpp file) for this map.

So basically, you're proposing that one big project with a complex autoconf
setup is split into many different subprojects (libraries), each with its
own autoconf setup? This suggestion may be valid in certain cases, but
you're still ending up using autoconf - perhaps even more of it...

At the end, some piece of code needs to check, at build time, whether the
system's compiler and libraries already include a map implementation, or
they don't. Autoconf (and its heirs to the throne, mentioned by others on
this thread) does this pretty well.

> This way, my main project has a straightforward build system, and the
> portability layer is isolated from the project, testable and more portable
> (what if we have a platform with a buggy std::map implementation we can't
> use? In case of (3) we can easily borrow one and change only the portability
> layer. What would the fakeroot-ng project do for such a system?).

What do you do if you have a relatively small project (not something like
OpenOffice) and you find yourself with 7 different "portability layers"
for all sorts of small features like this - do you really prefer having
7 different libraries, 7 build systems, 7 different "separate projects",
and so on, compared to what Shachar described - simple #ifdefs that solve the
problem?

> What about the case of inotify, Linux's fragmented audio system, etc., etc.?
> Again, the autosuite won't save the day here. If I want to write a portable
> program which uses inotify (which is kind of an oxymoron, as inotify is a
> Linux-specific API), what I really want is to program to a layer of
> abstraction above inotify, portable to as many systems as possible. If I don't

You're basically saying the same thing I am, but reaching a different
conclusion. Indeed, a portable program can't rely on inotify because that
is only available on Linux, and only certain versions thereof. When inotify
is not available, it needs to resort to other mechanisms which might be
available, and when all else fails, resort to slow-and-ugly polling.
Certainly, any sensible program design will wrap all these options into one
function or library inside this project, and use this function throughout
the project. Still, when compiling this function, somebody needs to choose
whether to compile inotify code, or one of the other options - and this
somebody is autoconf (or one of its competitors). If you compile this
function-or-library separately, basically nothing changes - you still need
to make decisions at compile time.

> would encourage me to write an ad-hoc half-broken portability layer with m4,

Unless you need to invent completely new kinds of tests - and in 99% of the
cases you don't - you don't actually need to learn m4 to use autoconf.
I do agree that m4 is ugly and was an unfortunate choice.


-- 
Nadav Har'El| Thursday, Jan 13 2011, 8 Shevat 5771
n...@math.technion.ac.il |-
Phone +972-523-790466, ICQ 13349191 |Love doesn't make the world go round -
http://nadav.harel.org.il   |love is what makes the ride worthwhile.

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: Die GNU autotools

2011-01-12 Thread Tzafrir Cohen
On Thu, Jan 13, 2011 at 12:22:21AM +0200, Elazar Leibovich wrote:
> On Thu, Jan 13, 2011 at 12:03 AM, Tzafrir Cohen wrote:
> 
> > > 1) Just use std::map, and ignore the performance penalty, no one will
> > > probably notice.
> >
> > People do notice. This does not work in real life (in too many cases).
> >
> 
> I really think that in most cases CPU is not the bottleneck, but of course
> it can be false for a specific application. This is not the main point here.
> 
> 
> >
> > > 2) Copy an efficient map implementation from somewhere (as in
> > > copy-paste), eg strinx.sf.net, and use it.
> >
> > This leads you to having duplicated useless code in your tree.
> >
> 
> Again, from a purity POV this is bad, since you could add this duplicated
> code as a library and it would be more sound. However, from a practical POV,
> it'll work, it reduces the dependencies of the project, it makes it more
> portable, and it's good enough.
> 
> 
> >
> > > 3) Have a project in a separate directory which implements an efficient
> > > map using an existing map implementation. For this project, it makes
> > > sense to use autotools. But this project will provide a solid testable
> > > interface (i.e. a .hpp file) for this map.
> >
> > So how can I pass parameters to the configure script?
> >
> 
> You don't. That's the whole point. In 99.9% of the cases, you don't want to
> pass parameters to the configure script. A portable, efficient std::map
> implementation is an example where you should design your program so that no
> parameters would be needed. The autosuite will figure out which platform you
> are on, and will compile the appropriate files for that platform. But your
> main logic wouldn't need the autosuite at all; the main logic of your code
> would only depend on the portability layer.

But it's a system (or user-installed) library. Why would I need to bundle
it with my code?

-- 
Tzafrir Cohen | tzaf...@jabber.org | VIM is
http://tzafrir.org.il || a Mutt's
tzaf...@cohens.org.il ||  best
tzaf...@debian.org|| friend



Re: Die GNU autotools

2011-01-12 Thread Oron Peled
On Tuesday, 11 January 2011 16:35:08 Nadav Har'El wrote:
> On Tue, Jan 11, 2011, Shlomi Fish wrote about "Re: Die GNU autotools":
> > m4 is very vile.
> If I would write it myself, the language I would choose is Tcl - because
> of its "{ ... }" quoting making it easy to include verbatim shell blurbs.

Nadav:
1. In m4 the quote characters are user-defined. Very old autoconf
   implementations used the default (a ` and a ').
   Modern autoconf uses by default a more visible pair
   of quote characters (a [ and a ]).

2. Actually, because m4 is just a macro processor, anything in your
   configure.ac file which is not an autoconf macro *IS* a verbatim
   shell script -- the only need for delimited shell blurbs is when they
   are passed as parameters to macros (indeed that's the common use
   case).

Shlomi:
3. While I can see the "learning curve" problem you point to,
   I cannot see how the CMake syntax is any improvement
   over the widely known Bourne-shell syntax for writing tests.

   For standard tests that already have prepared macros, the syntax
   should be nice in almost any language you pick.
   A predefined example from autoconf (for libsdl):
  AM_PATH_SDL
   or if you want a minimum version:
  AM_PATH_SDL([1.2])
   And then in your Makefile.in (or Makefile.am if you want automake)
   simply add $(SDL_CFLAGS) to compile commands and $(SDL_LIBS)
   to link commands.
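Put together, the SDL example above might look like this (a hedged sketch: AM_PATH_SDL, AC_INIT, etc. are standard autoconf/automake macros, while the project and file names are made up):

```
# configure.ac (sketch)
AC_INIT([myplayer], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AM_PATH_SDL([1.2])          # sets SDL_CFLAGS and SDL_LIBS
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am (sketch)
bin_PROGRAMS = myplayer
myplayer_SOURCES = main.c
myplayer_CFLAGS = $(SDL_CFLAGS)
myplayer_LDADD = $(SDL_LIBS)
```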

Night'

-- 
Oron Peled Voice: +972-4-8228492
o...@actcom.co.il  http://users.actcom.co.il/~oron
It's not the software that's free; it's you.
- billyskank on Groklaw


Re: Die GNU autotools

2011-01-12 Thread Elazar Leibovich
On Thu, Jan 13, 2011 at 12:03 AM, Tzafrir Cohen wrote:

> > 1) Just use std::map, and ignore the performance penalty, no one will
> > probably notice.
>
> People do notice. This does not work in real life (in too many cases).
>

I really think that in most cases CPU is not the bottleneck, but of course
it can be false for a specific application. This is not the main point here.


>
> > 2) Copy an efficient map implementation from somewhere (as in
> > copy-paste), eg strinx.sf.net, and use it.
>
> This leads you to having duplicated useless code in your tree.
>

Again, from a purity POV this is bad, since you could add this duplicated
code as a library and it would be more sound. However, from a practical POV,
it'll work, it reduces the dependencies of the project, it makes it more
portable, and it's good enough.


>
> > 3) Have a project in a separate directory which implements an efficient
> > map using an existing map implementation. For this project, it makes
> > sense to use autotools. But this project will provide a solid testable
> > interface (i.e. a .hpp file) for this map.
>
> So how can I pass parameters to the configure script?
>

You don't. That's the whole point. In 99.9% of the cases, you don't want to
pass parameters to the configure script. A portable, efficient std::map
implementation is an example where you should design your program so that no
parameters would be needed. The autosuite will figure out which platform you
are on, and will compile the appropriate files for that platform. But your
main logic wouldn't need the autosuite at all; the main logic of your code
would only depend on the portability layer.


> This means you'll spend a lot of work on a portability layer, which you were
> trying to avoid. But hey, you don't use autohell.
>

I'm not sure I follow you here:
1) I'm not sure why moving the platform-dependent code to a small separate
project will make you waste more time on it than building it inside your
main project.
2) I tried to make clear that when writing a portability layer it makes
sense to use the autosuite in order to figure out where you are, so I'm not
sure I follow your last (cynical?) comment involving the term "autohell".


> What you described above is an ad-hoc portability layer written in shell
> scripts and Makefiles.
>

Again, I'm still not sure I follow you. I didn't recommend using shell
scripts and makefiles in order to write the portability layer; the autosuite
makes perfect sense there. My point is that it shouldn't appear in any code
other than the portability layer, where the autotools configuration is part
of the logic.
(Indeed, if you don't have something like pnotify, you have to write it (or
at least part of it) yourself for your portable inotify project, but this is
true regardless of your usage of the autosuite.)


Re: Die GNU autotools

2011-01-12 Thread Tzafrir Cohen
On Wed, Jan 12, 2011 at 10:37:34PM +0200, Elazar Leibovich wrote:
> On Wed, Jan 12, 2011 at 4:52 PM, Shachar Shemesh wrote:
> >
> > fakeroot-ng prefers to use std::unordered_map, if it's available. If not,
> > __gnu_cxx::hash_map would do. If even that's not available, it would
> > grudgingly use std::map, even though that one has worse performance
> > characteristics.
> >
> > The test in its configure.ac is rather largish (not huge, mind you - 50
> > lines including the test for what flags are required to actually make gcc
> > support C++0x), config.h has the following lines (or similar):
> >
> >> /* The type for the map class */
> >> #define MAP_CLASS std::unordered_map
> >>
> >> /* The file to include in order to get the map class */
> >> #define MAP_INCLUDE <unordered_map>
> >>
> >
> >
> This is an excellent example to explain the preferred approach for software
> portability.
> 
> The basic philosophy here is: program to the interface and not to the
> implementation, and the autosuite encourages you to build a fragile ad-hoc
> duck-type-like interface instead of building a robust one.
> 
> What we're lacking here is an efficient map implementation that conforms to
> the std::map interface. So what I would do, in order of preference, in this
> situation is:
> 
> 1) Just use std::map, and ignore the performance penalty, no one will
> probably notice.

People do notice. This does not work in real life (in too many cases).

> 2) Copy an efficient map implementation from somewhere (as in copy-paste),
> eg strinx.sf.net, and use it.

This leads you to having duplicated useless code in your tree.

> 3) Have a project in a separate directory which implements an efficient map
> using an existing map implementation. For this project, it makes sense
> to use autotools. But this project will provide a solid testable interface
> (i.e. a .hpp file) for this map.

So how can I pass parameters to the configure script?

(And yes, automake has something for that)

> 
> This way, my main project has a straightforward build system, and the
> portability layer is isolated from the project, testable and more portable
> (what if we have a platform with a buggy std::map implementation we can't
> use? In case of (3) we can easily borrow one and change only the portability
> layer. What would the fakeroot-ng project do for such a system?).
> 
> In the described project, the portability layer (an efficient map conforming
> to the std::map interface) is coupled with the project that uses it
> (fakeroot-ng), and that is not, IMHO, a desirable design. Even if we can't
> design around autotools, we need to minimize and isolate the piece of code
> which must use it.

This means you'll spend a lot of work on a portability layer, which you were
trying to avoid. But hey, you don't use autohell.

> 
> Recommending the usage of autotools in every project (and not, say, only in
> projects whose sole goal is to be used as a portability layer) encourages
> you to program to the implementation.
> 
> What about the case of inotify, Linux's fragmented audio system, etc., etc.?
> Again, the autosuite won't save the day here. If I want to write a portable
> program which uses inotify (which is kind of an oxymoron, as inotify is a
> Linux-specific API), what I really want is to program to a layer of
> abstraction above inotify, which is portable to as many systems as possible.
> If I don't have anything like pnotify [*] - the autosuite won't really help
> me. It would encourage me to write an ad-hoc half-broken portability layer
> with m4, which is an invitation for trouble. It certainly won't help me be
> portable. If I must write portable software which uses a diverse API such as
> inotify, and there's no known portability layer, then I should implement my
> own portability layer as a separate project, test it on its own, and then I
> can be safe and avoid any autotools in my main project.

What you described above is an ad-hoc portability layer written in shell
scripts and Makefiles.


-- 
Tzafrir Cohen | tzaf...@jabber.org | VIM is
http://tzafrir.org.il || a Mutt's
tzaf...@cohens.org.il ||  best
tzaf...@debian.org|| friend



Re: Die GNU autotools

2011-01-12 Thread Elazar Leibovich
On Wed, Jan 12, 2011 at 4:52 PM, Shachar Shemesh wrote:
>
> fakeroot-ng prefers to use std::unordered_map, if it's available. If not,
> __gnu_cxx::hash_map would do. If even that's not available, it would
> grudgingly use std::map, even though that one has worse performance
> characteristics.
>
> The test in its configure.ac is rather largish (not huge, mind you - 50
> lines including the test for what flags are required to actually make gcc
> support C++0x), config.h has the following lines (or similar):
>
>> /* The type for the map class */
>> #define MAP_CLASS std::unordered_map
>>
>> /* The file to include in order to get the map class */
>> #define MAP_INCLUDE <unordered_map>
>>
>
>
This is an excellent example to explain the preferred approach for software
portability.

The basic philosophy here is: program to the interface and not to the
implementation, and the autosuite encourages you to build a fragile ad-hoc
duck-type-like interface instead of building a robust one.

What we're lacking here is an efficient map implementation that conforms to
the std::map interface. So what I would do, in order of preference, in this
situation is:

1) Just use std::map, and ignore the performance penalty, no one will
probably notice.
2) Copy an efficient map implementation from somewhere (as in copy-paste),
eg strinx.sf.net, and use it.
3) Have a project in a separate directory which implements an efficient map
using an existing map implementation. For this project, it makes sense
to use autotools. But this project will provide a solid testable interface
(i.e. a .hpp file) for this map.

This way, my main project has a straightforward build system, and the
portability layer is isolated from the project, testable and more portable
(what if we have a platform with a buggy std::map implementation we can't
use? In case of (3) we can easily borrow one and change only the portability
layer. What would the fakeroot-ng project do for such a system?).

In the described project, the portability layer (an efficient map conforming
to the std::map interface) is coupled with the project that uses it
(fakeroot-ng), and that is not, IMHO, a desirable design. Even if we can't
design around autotools, we need to minimize and isolate the piece of code
which must use it.

Recommending the usage of autotools in every project (and not, say, only in
projects whose sole goal is to be used as a portability layer) encourages
you to program to the implementation.

What about the case of inotify, Linux's fragmented audio system, etc., etc.?
Again, the autosuite won't save the day here. If I want to write a portable
program which uses inotify (which is kind of an oxymoron, as inotify is a
Linux-specific API), what I really want is to program to a layer of
abstraction above inotify, which is portable to as many systems as possible.
If I don't have anything like pnotify [*] - the autosuite won't really help
me. It would encourage me to write an ad-hoc half-broken portability layer
with m4, which is an invitation for trouble. It certainly won't help me be
portable. If I must write portable software which uses a diverse API such as
inotify, and there's no known portability layer, then I should implement my
own portability layer as a separate project, test it on its own, and then I
can be safe and avoid any autotools in my main project.


[OT] Windows DLL and shared memory (was: Die GNU autotools)

2011-01-12 Thread Shachar Shemesh

On 12/01/11 17:11, Oleg Goldshmidt wrote:


> I admit I am not up-to-date (but my recollection refers to years ago,
> too). Windows DLLs were not equivalent to UNIX/Linux shared libraries -
> they contained "relocatable" and not "position-independent" code, and
> each process had its own copy mapped into memory (yes, runtime memory).

Not exactly.

It is true that Windows DLLs are relocatable (rather than position 
independent), but that does not necessarily mean that they are relocated. 
Each DLL comes with a recommended load address (in virtual memory), and 
if no collision with another DLL happens, that is where it will be 
loaded. If that is the case, Windows will happily share the same 
physical memory for all copies of the text and read only segments of the 
DLL. When compiling a single project, it is fairly easy to make sure 
that all DLLs fit into memory together without being relocated.


Even on the rare occasion that the DLL does need to be relocated, the 
Windows kernel is aware that this page, while faulted, can be discarded 
and later recalculated from the file. As such, it is a candidate to be 
released from memory, despite being "changed" compared to the image on disk.


Shachar

--
Shachar Shemesh
Lingnu Open Source Consulting Ltd.
http://www.lingnu.com




Re: Die GNU autotools

2011-01-12 Thread Oleg Goldshmidt
> fakeroot-ng prefers to use std::unordered_map, if it's available. If not,
> __gnu_cxx::hash_map would do. If even that's not available, it would
> grudgingly use std::map, even though that one has worse performance
> characteristics.
>

Another excellent example, indeed.

>> I remember guessing that it was probably not considered a big problem on
>> Windows because Windows didn't have shared libraries, so memory would be
>> wasted in any case.
>>
>
> I'm not sure where you got that one. Even assuming you meant "share runtime
> memory", and not "shared code", Windows does have shared libraries.


Eh, this is noise in this thread - sorry for making that comment.

I admit I am not up-to-date (but my recollection refers to years ago, too).
Windows DLLs were not equivalent to UNIX/Linux shared libraries - they
contained "relocatable" and not "position-independent" code, and each
process had its own copy mapped into memory (yes, runtime memory).

Between myself and Shachar, Shachar is much more likely to be up-to-date and
better versed in details, so believe him. My comment was not essential to
this thread, anyway, and can safely be ignored.


> It is true that you are discouraged from using non-system DLLs from a
> shared location, unless you jump through the horrible hoops called
> "manifests", but still, a single application install would be expected to
> use only one version of a DLL, and its text and read only segments would,
> generally, be shared between all executables using it.



-- 
Oleg Goldshmidt | o...@goldshmidt.org


Re: Die GNU autotools

2011-01-12 Thread Shachar Shemesh

On 12/01/11 16:32, Oleg Goldshmidt wrote:
> And today I have an #ifdef in some code I wrote a few weeks ago
> because STL's compose1 and compose2 are officially parts of STL but
> not (yet) of the C++ standard, and some interesting platforms have
> them and some don't, and it may depend on the version of g++, ceteris
> paribus.




fakeroot-ng prefers to use std::unordered_map, if it's available. If 
not, __gnu_cxx::hash_map would do. If even that's not available, it 
would grudgingly use std::map, even though that one has worse 
performance characteristics.


The test in its configure.ac is rather largish (not huge, mind you - 50 
lines including the test for what flags are required to actually make 
gcc support C++0x), config.h has the following lines (or similar):

/* The type for the map class */
#define MAP_CLASS std::unordered_map

/* The file to include in order to get the map class */
#define MAP_INCLUDE <unordered_map>
The source file contains no #ifdefs at all. See 
http://fakerootng.svn.sourceforge.net/viewvc/fakerootng/trunk/parent.cpp?revision=313&view=markup#l32 
and 
http://fakerootng.svn.sourceforge.net/viewvc/fakerootng/trunk/parent.cpp?revision=313&view=markup#l54


Still, someone needs to figure out which version to use and put it in 
the right place, and it's not going to be the end user.




> I remember guessing that it was probably not considered a big problem
> on Windows because Windows didn't have shared libraries, so memory
> would be wasted in any case.


I'm not sure where you got that one. Even assuming you meant "share 
runtime memory", and not "shared code", Windows does have shared 
libraries. It is true that you are discouraged from using non-system 
DLLs from a shared location, unless you jump through the horrible hoops 
called "manifests", but still, a single application install would be 
expected to use only one version of a DLL, and its text and read only 
segments would, generally, be shared between all executables using it.


Shachar

--
Shachar Shemesh
Lingnu Open Source Consulting Ltd.
http://www.lingnu.com




Re: Die GNU autotools

2011-01-12 Thread Oleg Goldshmidt
Disclaimer: I don't use autotools a lot, nor do I particularly like it. I do
see its usefulness though, so I'll support Nadav.

2011/1/12 Elazar Leibovich 

>  Wondering whether your user has Bison or AT&T yacc? Use bison and drop
> yacc support, in a modern system you can expect bison to be supported.
>

What's a modern system? I am not going to check bison specifically - it's
beside the point, and I suppose you'll agree - but I do not expect that
some important target version of AIX or Cygwin will have the same
capabilities as Linux; I do not expect them to have the same (versions of)
compilers or other elements of the toolchain, the same organization of the
filesystem, the same system calls, the same threading model. Nor do I expect
that RHEL 5.4 and Fedora 14 will have the same stuff.

And then we are going to have fun with porting to all sorts of embedded
platforms and "appliances" that have some facilities and do not have
others... Not many will have bison (or bash, for that matter).


> But that's my point exactly! When you were in this business, the relevant
> targets were much more scattered and limited than nowadays. That is no
> longer the case; modern systems are much more standardized, and those that
> aren't are not interesting enough. jwz avoided templates in C++ because
> they weren't widely supported then [*] - can you imagine doing that in
> modern software?
>

I remember when MSVC barfed on half of templatized things that compiled and
ran on anything that was capable of generating heat. It was not so long ago,
around 2001/2002.

And today I have an #ifdef in some code I wrote a few weeks ago because
STL's compose1 and compose2 are officially parts of STL but not (yet) of the
C++ standard, and some interesting platforms have them and some don't, and
it may depend on the version of g++, ceteris paribus.


> > Would Go also compile on Solaris? On AIX? On Tru64 UNIX (if you happen
> > to have an Alpha machine)? On various versions of
> > Fedora/Debian/Ubuntu/Gentoo from the last 10 years? You can say you
> > don't care, but I do.
>
> 1) I think that on modern machines where gcc/gawk can run, it is very
> likely to compile.
>
>

"Can run on"? Are you going to tell the user to install gcc and gawk first?
I doubt it would be acceptable.


> 2) I'm not sure that in this case (ie, gcc is running on the machine)
> autotools would make much difference.
>

Right now I am looking at gcc 4.1.2, 4.3.4, and 4.5.1 - they are not the
same.


> Huh? And having #ifdef JAVA5 sprinkled in the code looks like a better
> idea?
>

Hmm... Let's see, Java 5 introduced generics and autoboxing, if memory
serves. Yes, it does sound like a good idea.

And for more sophisticated examples, the concurrency semantics are different
for just about every java version.

The way java solves the problem, by the way, is that just about every
application comes with its own JVM. I recall two applications belonging to a
single suite, from the same vendor, that were supposed to work together, and
each came with its own JVM, and neither worked with the other JVM.

I remember guessing that it was probably not considered a big problem on
Windows because Windows didn't have shared libraries, so memory would be
wasted in any case.

> Does that mean that this is a good idea in general? Does that mean that if
> I'm writing "hoc" today I'd be better off using the autosuite? I really
> think not.
>

Look, this depends on your requirements. If you stick to writing software
that is OK with certain versions of kernel, gcc, gawk, bison, bash, whatever
as prerequisites, fine.

I am used to requirements like "Your code must compile and run on anything
we may purchase in the future. No, we do not have a complete list of target
platforms at the moment."

Yes, it is possible. Sticking to standards certainly helps. An occasional
#ifdef helps. Even bloody #pragmas may be useful once in a blue moon.
Autotools may certainly be a lifesaver.

-- 
Oleg Goldshmidt | o...@goldshmidt.org


Re: asterisk and bezeq

2011-01-12 Thread geoffrey mendelson


On Jan 11, 2011, at 3:49 PM, Erez D wrote:

> Did you get any answers?

Sorry, I never pursued it. I'm hoping someone who actually speaks
Hebrew will. :-)

Geoff

--
Geoffrey S. Mendelson,  N3OWJ/4X1GM
Those who cannot remember the past are condemned to misquote it.




Re: Die GNU autotools

2011-01-12 Thread Shachar Shemesh

On 12/01/11 15:49, Nadav Har'El wrote:

> Actually, the Linux kernel doesn't use autotools ;-)
>
> It assumes that the compilation system is a familiar GNU/Linux setup, and
> won't work at all on other Unix versions, C compilers which aren't recent
> versions of gcc, and so on.

I think there's a slightly different reason for that. Autoconf works 
well when most of the required customizations can be automatically 
detected by the configure script. For the Linux kernel, this is not the 
case. The kernel has zero runtime dependencies (and very few compile 
time dependencies), and so no automatic configuration is required at 
all. Instead, it has a huge number of user configuration options, far 
too many to be passed as "--with" and "--enable". As such, I think it is 
justified that it does not go for autoconf, but rather a menu-based 
system stored in a text file.


Shachar

--
Shachar Shemesh
Lingnu Open Source Consulting Ltd.
http://www.lingnu.com




Re: Die GNU autotools

2011-01-12 Thread Nadav Har'El
On Wed, Jan 12, 2011, Elazar Leibovich wrote about "Re: Die GNU autotools":
> That's exactly my point! Wondering if your user has matherr(3) or gamma(3)?
> Don't use it! In a modern system you'll have alternatives for that.
> Wondering whether your user has Bison or AT&T yacc? Use bison and drop yacc
> support; in a modern system you can expect bison to be supported.

This is easy to say when I gave you as examples things which have been
decided long ago, and today 99% of the systems use the same thing.

But what if you want to support that 1% using some weird or old system?
And what about newer features on which the verdict isn't yet out?

Just as an example, consider optional features like poll/select/epoll,
esd/arts/pulseaudio, inotify/dnotify, ipchains/iptables (I hope you are
familiar with at least some of these), that during the (relatively)
recent past have come and gone in mainstream Linux systems (forget
all the esoteric Unix systems I mentioned). 

You can assume that today all systems have epoll, pulseaudio and inotify.
But what about a system from last year that still doesn't use pulseaudio?
(and this is of course just an example).

> But whatever you do, do NOT set compile-time flags in your software. They
> should be in a compatibility layer you should use, not in your code. It will
> come back and haunt you.

Looking back at software I have written, many times you simply have no choice.
You can only go so far with the "use the lowest common denominator" approach.
At some point you want to start using features only available on some systems,
or features that due to bugs or other reasons behave differently on different
systems. Some things you can check at runtime, but some you can't: if you
are compiling on a system that doesn't have inotify, it won't have
#include <sys/inotify.h>, and you won't even be able to compile a function
that attempts to use it - so you cannot just try to use it at runtime and
see if it works!

> Listen, the autosuite is not a magic powder you mix into your project, and
> TADA! it compiles against all legacy unices. You still need to compile, run
> and test the code on each platform.

This is 100% true. But the issue is, once you try to compile on a different
system (be it a different Linux distribution, a different version of a
familar distribution, a type of Unix, etc.) and you discover a compilation
problem or a special fix you need to make, autoconf allows you to write
this fix and make it automatic - so that whenever somebody compiles the
software again and faces the same difference (not necessarily on the same
system), the solution is already there.
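As a sketch, the autoconf side of such a recorded fix can be as small as
this configure.ac fragment (the project name is made up; the macros are the
standard AC_CHECK_HEADERS / AC_CHECK_FUNCS):

```m4
# configure.ac -- illustrative fragment
AC_INIT([myproject], [1.0])
AC_PROG_CC

# Once you discover that some target lacks <sys/inotify.h> or strdup(3),
# you record the check here; every future build, on any system, gets
# HAVE_SYS_INOTIFY_H / HAVE_STRDUP defined (or not) in config.h.
AC_CHECK_HEADERS([sys/inotify.h])
AC_CHECK_FUNCS([strdup])

AC_CONFIG_HEADERS([config.h])
AC_OUTPUT
```

The point is that the fix is written once, at the place where the
difference was discovered, and then applies automatically everywhere.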

> to invest the time in order to make sure it works there. So if there *is* a
> special platform you need your software to work on, your best bet is to
> stick to as standard a C as you can, and make the build system dead simple, to
> make compiling it easy on this platform.

Yes, do this and go buy some Intel stocks as well, because you are assuming
they will remain a monopoly.

> But that's my point exactly! When you were in this business, the relevant
> targets were much more scattered and limited than nowadays. It is no longer

This is definitely true, but it doesn't mean there is zero diversity today.
There are still many types of Linux distributions with their own peculiarities,
and you know what - I'm writing this mail on Solaris ;-)

And personally, I'm hoping that in the future, we'll have more diversity
again.

> 2) I'm not sure that in this case (ie, gcc is running on the machine)
> autotools would make much difference.

It did - because even if you were using gcc on these systems (and I often
did), you still had your own system's include files and libraries - most
of them did not come from gcc. Remember that glibc is a separate project,
and it is only available for Linux.

> At least now it doesn't need directory structure to compile. It does need
> two environment variables to compile, but they are the equivalent of a
> './configure --to-ARM' command (ie, impossible to figure out automatically)
> so I'm not sure it's such a bad thing.
> Anyhow, from my personal experience, building Go is very easy and
> straightforward.

Thanks, I really need to try it again - about a year has passed since I
last tried it, so maybe things indeed improved.

> Huh? And having #ifdef JAVA5 sprinkled in the code looks like a better idea?

Yes - because this way they could have optionally used Java 5 features when
they made things better or quicker, but stuck to the common Java 4 in most
places.

> But indeed you must have an autotools-like capability with a project like the
> kernel. Does that mean that this is a good idea in general? Does that mean
> that if I'm writing "hoc" today I'd be better off using the autosuite? I
> really think not.

Actually, the Linux kernel doesn't use autotools ;-)

It assumes that the compilation system is a familiar GNU/Linux setup, and
won't work at all on other Unix versions or on C compilers which aren't
recent versions of gcc.

Re: Die GNU autotools

2011-01-12 Thread Elazar Leibovich
For the sake of brevity, I'll summarize the claims I heard, and reply to
them:

> You still didn't bring a good enough example; if only the Go project was
> more complex, they would benefit so much from the autosuite / they have a
> really bad build system now, it causes so many problems.


I won't argue with that. The fact is that the Go programming language is
relevant and working now on various platforms. I'll let the wise reader
judge if it's good enough. Remember the philosophy: it has to work OK for
99.9% of the use cases. I just want to assure you I'm not the only one using
Go ;-).

> Autotools are easy for the client. Just as easy as, or easier than, any
> other solution, so what's all the fuss?


When everything is working correctly, it doesn't matter which build system
you use. When there is trouble, it is extremely hard to debug an
autosuite-based project, in contrast to a makefile-based project. I've had
problems with makefile-based build systems before and they were easy to
solve. I usually decided to bypass problems I've had with autosuite-based
builds.

Now I want to quote Nadav in order to demonstrate the wrong-nowadays (IMHO)
philosophical approach of the autosuite:

> At that point, I'd been developing this software for almost a decade, and
> had run it on more than a dozen different variants of Unix, and many
> versions thereof, and I've accumulated a big ugly Makefile and big ugly
> INSTALL instructions, which you needed to follow to compile it. Were you
> using ANSI C? Set this flag. Were you compiling on AIX? Set that flag. Do
> you have matherr(3)? gamma(3)? Did your C compiler have a specific bug that
> was only found on one specific machine? Were you using Bison or AT&T yacc?


That's exactly my point! Wondering if your user has matherr(3) or gamma(3)?
Don't use them! On a modern system you'll have alternatives.
Wondering whether your user has Bison or AT&T yacc? Use bison and drop yacc
support; on a modern system you can expect bison to be supported.
But whatever you do, do NOT set compile time flags in your software. They
should be in a compatibility layer you use, not in your code. It will
come back and haunt you.

> Look at the legacy OpenVMS I have in my workplace, we need to support it,
> and it only has the Metaware compiler. Only the autosuite would come to the
> rescue in this case.


Listen, autosuite is not a magic powder you mix into your project, and TADA!
it compiles against all legacy unices. You still need to compile, run, and
test the code on this platform. And nowadays, there is only a small number
of platforms which are relevant to your software, and you're probably not
going to invest the time in order to make sure it works there. So if there
*is* a special platform you need your software to work on, your best bet is
to stick to as standard a C as you can, and make the build system dead
simple, to make compiling it easy on this platform.

> I've been in this business for 10 years, and I tell you kid, you ain't
> know nothing.


But that's my point exactly! When you were in this business, the relevant
targets were much more scattered and limited than nowadays. It is no longer
the case; modern systems are much more standardized, and those which aren't
are not interesting enough. jwz avoided templates in C++ because they weren't
widely supported then[*]. Can you imagine doing that in modern software?

[*] http://www.jwz.org/blog/2009/09/that-duct-tape-silliness/
--
Example-specific claims
--

> Would Go also compile on Solaris? On AIX? On Tru64 UNIX (if you happen
> to have an Alpha machine)? On various versions of
> Fedora/Debian/Ubuntu/Gentoo from the last 10 years? You can say you don't
> care, but I do.


1) I think that on modern machines where gcc/gawk can run, it is very
likely to compile.
2) I'm not sure that in this case (ie, gcc is running on the machine)
autotools would make much difference.
3) The main reason that Go might not run on these machines is lack of
testing, not a bad build system.
4) Even if the autosuite will save some time, testing and maintaining the
software on so many systems will cost much more than maintaining the build
system. Here is a summary of the problems with porting Go to Solaris.
Automake won't solve any of them.

https://groups.google.com/group/golang-nuts/browse_thread/thread/2f4f9ceb9cd933a4

> Go needs a special directory structure/environment settings


At least now it doesn't need a directory structure to compile. It does need
two environment variables to compile, but they are the equivalent of a
'./configure --to-ARM' command (ie, impossible to figure out automatically),
so I'm not sure it's such a bad thing.
Anyhow, from my personal experience, building Go is very easy and
straightforward.

> ANT is bad, because...


I only brought the ANT and the godag examples as things which make
building simple things simple, in contrast with autoscan, which makes
building simple projects difficult.

Just as an example, consider the Apa