On Sun, 15 Jan 2017 19:54:37 -0700 Michael Jennings <m...@eterm.org> said:

> > Let the bikeshedding begin! :)
> 
> Well, since you asked....  ;-)
> 
> Bill Joy (creator of "vi") once said, "Vi was created for a world that
> no longer exists."  I think it's safe to say that the same is true of
> autotools.  Most commercial UNIX flavors are dead, or may as well be,
> at least for E's purposes.  Plus, FreeBSD offers Linux compatibility,
> and the most-used devices are mobile/embedded now anyway.  The days
> for which autoconf and libtool were created are past us.

correct. that's one major justification: "the value autotools once provided is
no longer enough to justify its downsides". :)

> I think Git proves that Kconfig can be both powerful and portable in
> the modern age, so I tend to recommend it nowadays.  CMake is also a
> solid option.  I tend to advise against the custom home-grown shell
> script since it's going to involve a lot of wheel re-invention, and
> Kconfig/CMake will almost certainly out-perform it...but it will still
> beat auto*/libtool.

realistically though... we need very little re-invented. we need:

1. gathering of pkg-config output, which is as simple as VAR=`pkg-config x
--libs` (or --cflags, or --modversion)
2. some version number comparison function
3. something that can --enable/--disable something (set a var or not) or
--with-xxx=XXX (set a var to a string)
4. something that can generate a config.h from some set of vars
5. something to compile a piece of code to test if it compiles and set vars
based on the result
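
for illustration, items 1, 3 and 4 might look something like this in pure
/bin/sh. this is only a minimal sketch: the option names, the zlib dependency
and the HAVE_FOO define are hypothetical placeholders, not efl's real
configure knobs.

```shell
#!/bin/sh
# minimal configure sketch: option parsing, pkg-config gathering,
# and config.h generation. all feature/module names are hypothetical.

prefix=/usr/local
want_foo=yes                       # toggled by --enable-foo/--disable-foo

for arg in "$@"; do
    case $arg in
        --prefix=*)    prefix=${arg#*=} ;;
        --enable-foo)  want_foo=yes ;;
        --disable-foo) want_foo=no ;;
        --with-bar=*)  bar=${arg#*=} ;;      # --with-xxx=XXX sets a var
        *) echo "unknown option: $arg" >&2; exit 1 ;;
    esac
done

# item 1: gather pkg-config output into vars (guarded, since pkg-config
# or the module may be absent on the build machine)
if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists zlib; then
    ZLIB_CFLAGS=`pkg-config zlib --cflags`
    ZLIB_LIBS=`pkg-config zlib --libs`
fi

# item 4: generate config.h from the vars set above
{
    echo "/* generated by configure - do not edit */"
    echo "#define PACKAGE_PREFIX \"$prefix\""
    [ "$want_foo" = yes ] && echo "#define HAVE_FOO 1"
} > config.h
```

run as e.g. `./configure --disable-foo --with-bar=/opt/bar`; each option just
sets a shell var, and config.h falls out of those vars at the end.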

i am willing to bet that re-inventing these very limited things (and maybe 2
or 3 others i missed), plus the usage of them in such a shell script, would be
far less complexity and size than our current configure.ac alone, let alone
the m4 macros we have on top of that.
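
the other two pieces (version comparison and a compile test) could be sketched
like this, assuming only POSIX sh plus a working $CC. again just a sketch, not
a finished implementation; the comparison handles dotted numeric versions
only.

```shell
#!/bin/sh
# ver_ge A B: succeed (exit 0) if dotted version A >= B.
# numeric fields only - "1.0rc1" style suffixes are not handled.
ver_ge() {
    a=$1 b=$2
    while [ -n "$a" ] || [ -n "$b" ]; do
        ax=${a%%.*} bx=${b%%.*}
        [ "${ax:-0}" -gt "${bx:-0}" ] && return 0
        [ "${ax:-0}" -lt "${bx:-0}" ] && return 1
        case $a in *.*) a=${a#*.} ;; *) a= ;; esac
        case $b in *.*) b=${b#*.} ;; *) b= ;; esac
    done
    return 0
}

# try_compile SNIPPET: succeed if the C snippet compiles with $CC (default cc)
try_compile() {
    printf '%s\n' "$1" > conftest.c
    ${CC:-cc} -c conftest.c -o conftest.o >/dev/null 2>&1
    rc=$?
    rm -f conftest.c conftest.o
    return $rc
}

# typical usage in a configure script:
#   ver_ge "`pkg-config x --modversion`" 1.8.0 || echo "x too old" >&2
#   try_compile 'int main(void) { return 0; }' && HAVE_WORKING_CC=1
```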

so it'd be a win over what we have regardless.

would cmake be less? how much will we fight cmake? i don't know.

> Just keep in mind that VTorri has said he will vamoose if the project
> abandons autotools, so if he still feels that way, that should factor
> into the decision....
> 
> Michael
> 
> 
> 
> On Sun, Jan 15, 2017 at 7:30 PM, Carsten Haitzler <ras...@rasterman.com>
> wrote:
> 
> > I'm going to bring this up as it's highly controversial... and not everyone
> > is going to be happy, but doing NOTHING is worse.
> >
> > I spent a weekend doing LOTS of builds on my Raspberry Pi 3... and I have
> > come to realize some things:
> >
> > 1. Libtool is shit. Seriously. It provides no value for us. If anything it
> > provides vastly negative value: about 50% of all CPU time during make is
> > spent executing a ~12k-line libtool shell script. On a fast enough
> > machine you don't notice easily, as the script runs, then sleeps, then the
> > compiler kicks in, then it exits. It's hard to notice. On a Pi I could
> > literally watch the libtool shell script think and burn CPU... for about
> > 50% of the time.
> >
> > 2. Just running make chews CPU for multiple seconds (30 or so) as it has
> > to parse 55k+ lines of Makefile(s) and figure them out. Not I/O time
> > statting stuff; real CPU processing time. Before it even does anything
> > useful.
> > 3. Re-running autogen.sh takes about the same time as building the rest
> > of the software.
> > 4. Whenever we do make install these days, libtool is friggin' re-linking
> > almost everything. It's horrendous. A make install that should take < 5
> > seconds takes a minute on a fast intel box. On a Pi it's even worse.
> >
> > Some quick back-of-the-napkin math tells me we'd cut our full build times
> > down to maybe 1/4 of what they are now by going with a raw hand-made
> > makefile. For my build set (efl + e + terminology + rage) that'd go from
> > 2hrs to 30mins. It'd drastically improve our productivity when developing.
> > When hunting a bug and "trying things" we have to wait for all that
> > relinking mojo. It's horrible. Change anything in eina and it causes eo
> > and eolian to rebuild, which causes regeneration of eolian output, which
> > causes even more rebuilds... When you're fast-cycling to find a bug this
> > is the last thing you want. Technically it's correct, as they do depend on
> > each other, but this is something we can deal with on our own and test in
> > the end with a full rebuild. When a full rebuild is 1/4 the time, it's far
> > less painful.
> >
> > I think we really need to reconsider our build system. It's a drag on
> > productivity. It's been pissing me off now for a long time long before I
> > got a Pi. It's just an order of magnitude worse on a Pi.
> >
> > So here is our reality:
> >
> > 1. We don't need autotools esoteric OS support. We are a complex enough
> > library set that a new OS requires us to port and support it. So we
> > basically support the following OS's:
> >
> >   * Linux
> >   * FreeBSD
> >   * OpenBSD
> >   * NetBSD
> >   * Darwin/OSX
> >   * Windows
> >   * Solaris/Open Solaris/Open Indiana
> >
> > The Unixen outside the above list simply don't matter for our purposes.
> >
> > That's our reality. Anything else requires a specific port and real work,
> > and so can happily mean someone has to do build system work too. They'd
> > have to with autotools anyway. So we don't need support beyond the above,
> > and any new OS needs explicit support in code anyway, so we may as well
> > add some detection/customisation in the build system then too.
> >
> > 2. Very few people grok the .m4 stuff in our autotools, and very few ever
> > will. Our m4 macros are there to make our configure.ac smaller and easier
> > to maintain, but combined, our m4 + configure.ac blob is 24k lines of
> > shell + m4 mix. configure.ac alone is 6k lines. I am willing to bet we
> > can do a cleaner PURE /bin/sh configure in far, far less than 6k lines
> > total for everything we need.
> >
> > 3. The more time goes on, the more we fight with autofoo and do "weird
> > stuff" like code-generate during build (eolian) and the more we have to
> > work around autotools to get stuff done.
> >
> > 4. A lot of the stuff autotools does to be "technically correct" hurts us
> > rather than helps us.
> >
> > So given that.. what do we do? Options:
> >
> >   * A shell script to replace configure + gen a Makefile.cfg & include from
> > our Makefile(s)
> >   * CMake
> >   * KConfig
> >   * Scons
> >   * Waf
> >   * Meson
> >   * Ninja
> >   * Jam
> >
> > Possibly more. I'm wary of adopting any "fancy build system" unless we truly
> > don't have to fight it and it brings a LOT of value with it.
> >
> > My personal shortlist of the options is the top 3 above: a hand-rolled
> > simple makefile set with just enough shell to detect some things and
> > gather pkg-config output etc., followed by the kernel's Kconfig setup and
> > CMake.
> >
> > I know that with a hand-rolled system we can avoid the relinking, and
> > avoid rebuilding all of efl when you touch a line in eina (by just not
> > running makedeps... - if you change something in a shared header file
> > that changes memory layout and needs a rebuild, then do a rebuild by
> > hand - you should know to do this as a developer).
> >
> > I do not know if cmake will be as nice to us. Kconfig I don't know either.
> >
> > I propose that whatever we come up with should support at minimum the
> > following build system "features":
> >
> >   * configure --prefix=XXX
> >   * configure --bindir=XXX
> >   * configure --sysconfdir=XXX
> >   * configure --libdir=XXX
> >   * configure --includedir=XXX
> >   * configure --datadir=XXX
> >   * configure --localedir=XXX
> >   * configure --mandir=XXX
> >   * configure --docdir=XXX
> >   * at least all the relevant configure features we added for efl
> >   * make (from any dir/subdir)
> >   * make install
> >   * make uninstall
> >   * make DESTDIR=xxx
> >   * make dist
> >   * make distcheck
> >   * make check
> >   * cross-compiling (--host=XXX --build=XXX)
> >   * gettext support
> >
> > Let the bikeshedding begin! :)
> >
> > --
> > ------------- Codito, ergo sum - "I code, therefore I am" --------------
> > The Rasterman (Carsten Haitzler)    ras...@rasterman.com
> >
> >
> > ------------------------------------------------------------------------------
> > Developer Access Program for Intel Xeon Phi Processors
> > Access to Intel Xeon Phi processor-based developer platforms.
> > With one year of Intel Parallel Studio XE.
> > Training and support from Colfax.
> > Order your platform today. http://sdm.link/xeonphi
> > _______________________________________________
> > enlightenment-devel mailing list
> > enlightenment-devel@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/enlightenment-devel
> 
> 
> 
> -- 
> Michael Jennings (KainX)   https://medium.com/@mej0/    <m...@eterm.org>
> Linux/HPC Systems Engineer, LANL.gov      Author, Eterm (www.eterm.org)
> -----------------------------------------------------------------------
>  "The trouble with doing something right the first time is that nobody
>   appreciates how difficult it was."                      -- Walt West
> 


-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    ras...@rasterman.com


