On Mon, 16 Jan 2017 12:00:24 -0200 Gustavo Sverzut Barbieri
<barbi...@gmail.com> said:

> Hi Raster,
> 
> I share the same annoyances with autoshit... and the more I think
> about it, the more I hate it. And I was one of the few that ever
> touched or created .m4 to support our project... still hate the mess
> or "bare shell" + "m4" + make... so many different rules, syntax to
> keep up.
> 
> To clarify some points:
> 
>  - kconfig is NOT a build system per se, it's just managing KEY=VALUE,
> enforcing dependencies (forward and backward, like 'depends on X' and
> 'select X'). At the end it can generate a KEY=VALUE file that is suitable
> to be included in a Make-like environment, and generate a ".h" definition
> file to be used in C/C++. Then you create your own

yeah. i was listing it as it is at least partly a "configure" solution and the
rest is hand done makefiles. well, as it's used in the kernel it is :)

> sh/make/ninja/whatever build based on those values. System
> dependencies, such as pkg-config, must be handled elsewhere (shell,
> python, perl...) and written as a Kconfig file that is to be included
> by the main Kconfig; then it won't be user-selected but will still be
> used to enable-disable features.
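for context, a minimal kconfig fragment looks something like this (names are
made up, just to show the 'depends on'/'select' syntax mentioned above):

config HAVE_GLIB
        bool

config USE_GLIB
        bool "enable glib support"
        depends on HAVE_GLIB

config FEATURE_FOO
        bool "feature foo"
        select USE_GLIB

the tooling then emits a KEY=VALUE file (e.g. CONFIG_USE_GLIB=y) that you can
include straight from make, plus a generated header with
"#define CONFIG_USE_GLIB 1" for C/C++.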
> 
>  - pure gnu make is great, with very capable macros on its own, you
> can auto generate rules, recursively include files... that leads to a
> single 'make' (not recursive) yet nicely split into individual files
> like the linux kernel. However it may be a bit slow on big projects;
> it's showing its age on Linux or Buildroot if you don't have a fast
> enough CPU.
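to make the non-recursive layout concrete, a minimal sketch (file names made
up):

# top-level Makefile - a single non-recursive make
subdirs := src/lib src/bin
include $(foreach d,$(subdirs),$(d)/Makefile.mk)

# each subdir's Makefile.mk only appends to the shared vars, e.g.:
#   srcs += src/lib/eina_example.c

objs := $(srcs:.c=.o)

all: $(objs)

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@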

we already depend on gnu make anyway... so nothing new. :) makefiles are
something most c/c++ devs know, so it's the "devil you know", with gnu make
being the far friendlier make variety - decent enough that you don't have to
do workarounds for it.

i actually would be perfectly happy going back to subdir makefiles as you can
just cd to the right place in the tree and "make" from there easily enough... :)

>  - ninja (I'm not an expert here) seems to take a simpler ruleset,
> less dynamic, but executes much faster. I'm using that as a backend for
> cmake... and although the syntax seems to be very simple to write
> manually, it will NOT handle conditionals per se (that's explicitly
> stated as a non-goal), so usually people generate it from other systems
> (ie: cmake). They ship with a python module to generate rules for you
> based on some conditionals. It's gaining traction these days, and is
> particularly useful for big projects like WebKit as a backend for CMake.
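for a feel of the syntax, a hand-written build.ninja is about this simple (a
minimal sketch, file names made up):

cflags = -O2 -Wall -fPIC

rule cc
  command = cc -MMD -MF $out.d $cflags -c $in -o $out
  depfile = $out.d
  deps = gcc

rule link
  command = cc -shared -o $out $in

build eina_example.o: cc src/lib/eina_example.c
build libexample.so: link eina_example.o

note there is no if/else anywhere - conditional logic has to live in whatever
generates the file.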
> 
>  - cmake is nice because it's uniform: they have their own language
> that is shell-like, you just write that and it will generate Make,
> Ninja or whatever. They have "options" that will be displayed in some
> GUI, like "cmake-gui", BUT usually they don't handle interdependencies;
> that's left to be done manually in the cmake file like we do in
> autoconf -- example: if(NOT GLIB_FOUND AND GLIB_ENABLED)
> message(FATAL_ERROR "Missing Glib") endif() -- something that is
> handled perfectly with kconfig. It has lots of traction, and all
> distros have some helpers to handle it.
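spelling out that manual dependency handling, a sketch using cmake's stock
pkg-config module:

# CMakeLists.txt fragment
option(GLIB_ENABLED "build with glib support" ON)
find_package(PkgConfig REQUIRED)
pkg_check_modules(GLIB glib-2.0)
if(GLIB_ENABLED AND NOT GLIB_FOUND)
  message(FATAL_ERROR "Missing Glib")
endif()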

cmake is just another very well known option that is well supported, with lots
of docs and examples - like autotools in that way, and so that is a big
positive.

> - scons is also nice because it's uniform, it's written in Python that
> is a more straightforward language than cmake, also more powerful --
> this is BOTH bad and good. It's kinda old and never got too much
> traction.

also vtorri's link shows it to be incredibly slow. :(

>  - waf/meson/jam and I'd add http://gittup.org/tup/: all look nice,
> but I don't have much experience with them. They're not that widely used
> and seem to only cover a subset as well. (tup is nice since it can run an
> inotify-based daemon that monitors for changed files... which helps *a
> lot*).

yeah. saw that in vtorri's speed comparison thing. it seems fast. beyond that i
know very little.

> I have lots of experience with autotools (efl, and all other
> projects), cmake (I did the initial autotools->cmake conversion)
> and kconfig (my recent project soletta uses that). So consider that I
> may be biased, but:
> 
> I'd *NOT* go with glue in shell, because that usually needs lots of
> complex data structures to keep the code simple; otherwise you waste lots
> of time working around shell's poor hash support... Pure
> shell would also not be my favorite tool to write the build system per
> se since tracking parallel tasks with it is more complex than with
> make/ninja or other tools meant for that.

oh no. i was thinking of pure sh just to replace "configure" and generate a
Makefile.conf that is included from a hand rolled Makefile - basically the
KEY=VALUE bit of kconfig plus everything needed for running compile tests to
detect things (we can split the tests into a configure_tests dir and just
issue a compile via a wrapper func from the shell and gather the return code).
so it'd just replace, functionally, what configure and/or kconfig does, in a
very simple way.

can kconfig gather pkg-config cflags/libs and versions? can it do feature tests
(does func x exist in library y - or more specifically, does a compile succeed
with a func x() in the src code if it also #includes x/y/z and adds -lx -ly -lz
-Lx -Ly -Lz)? fundamentally these tests in autotools are very easy to DIY on
the os's we support. does kconfig do this or will we have to "roll our own"?
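the DIY version of such a compile test is tiny - a rough sketch of the wrapper
func idea from above (names made up):

# in conf/configure_funcs.sh
try_compile() {
    # $1 = result var name; the matching test lives in configure_tests/$1.c;
    # remaining args are extra cflags/includes/libs for this test
    name=$1; shift
    if ${CC:-cc} "configure_tests/$name.c" -o "configure_tests/$name.bin" \
        "$@" >/dev/null 2>&1; then
        eval "$name=yes"
    else
        eval "$name=no"
    fi
    rm -f "configure_tests/$name.bin"
}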

> TL;DR: the only complete solution is CMake, but if I were to pick
> something specifically for EFL's use cases I'd go with kconfig + ninja
> with glue in Python -- or Perl (since kconfig uses that) or Lua (since
> efl uses that). Second option is traditional kconfig + gnu-make.
> 
> https://github.com/solettaproject/soletta is an example of kconfig +
> gnu-make + python.
> 
>  - kconfig is embedded into the project as pure C code (not an external
> dep) https://github.com/solettaproject/soletta/tree/master/tools/kconfig
> 
>  - https://github.com/solettaproject/soletta/tree/master/tools/build
> base definitions

i am not sure i can quickly tell from that whether it can do pkg-config checks
or compile checks and so on like the above... can it?

>  -
> https://github.com/solettaproject/soletta/blob/master/data/jsons/dependencies.json
> we defined our dependencies file format in JSON, which we parse and execute
> from Python. Could be "YAML" or something like that. But JSON is not that
> cumbersome and it's easy to write a jsonschema to validate it.

now i begin not to like this. write a file that then goes through a generator
to generate another file that goes through another generator that... it's
autotools all over again.

my thoughts on plain sh were just to execute a series of checks, gathering the
information into a Makefile.conf (and config.h), then include that makefile
(and of course config.h). we'd have shell funcs like:

#!/bin/sh
./conf/configure_funcs.sh

# parse arguments like --prefix=XXX and set vars accordingly
parse_args

# below will append VAR1 to the CHECK_FUNCS var like "VAR1 VAR2 VAR3" so
# a simple 'for I in $CHECK_FUNCS; do ...' can iterate over them when
# write_makefile_conf is called and just write out makefile vars (like
# whether that var is true/false etc.), and will also just generate a simple
# c file to test compile
check_c_func VAR1 funcname inc1.h inc2.h inc3.h -Ixxx -llib1 -Ldir2
# like the above but specifically runs pkg-config --cflags/--libs and sets
# VAR2_CFLAGS and VAR2_LIBS accordingly
check_pc VAR2 lib1 lib2 lib3
# checks lib1 is at least version 1.2.3 (the max version arg is optional)
check_pc_version lib1 1.2.3
# checks lib2 is between 1.2.3 and 2.4.5
check_pc_version lib2 1.2.3 2.4.5

if test "x$FEATURE_X" = "xyes"; then
  check_pc VAR3 libxx libyy
  check_pc_version libxx 1.5.0
  check_pc_version libyy 1.6.0
fi

# and so on...

# at end of shell
write_makefile_conf Makefile.conf
write_header_conf config.h

or you get the idea...
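fleshing one of those out, check_pc could be as small as this (a sketch,
assuming pkg-config is in $PATH):

check_pc() {
    var=$1; shift
    if pkg-config --exists "$@"; then
        eval "${var}_CFLAGS='$(pkg-config --cflags "$@")'"
        eval "${var}_LIBS='$(pkg-config --libs "$@")'"
        eval "$var=yes"
    else
        eval "$var=no"
    fi
}

# and the version checks map straight onto pkg-config's own flags;
# the caller decides whether a failure is fatal:
check_pc_version() {
    pkg-config --atleast-version="$2" "$1" || return 1
    test -z "$3" || pkg-config --max-version="$3" "$1" || return 1
}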

>  -
> https://github.com/solettaproject/soletta/blob/master/data/scripts/dependency-resolver.py
> parses the above json and executes commands such as pkg-config, 'cc' and so
> on. It will output a kconfig file that is included by the main kconfig to say
> what's supported (ie: icu, microhttp...)
> 
> 
> 
> On Mon, Jan 16, 2017 at 12:30 AM, Carsten Haitzler <ras...@rasterman.com>
> wrote:
> > I'm going to bring this up as it's highly controversial... and not everyone
> > is going to be happy, but doing NOTHING is worse.
> >
> > I spent a weekend doing LOTS of builds on my Raspberry Pi 3... and I have
> > come to realize some things:
> >
> > 1. Libtool is shit. Seriously. It provides no value for us. If anything it
> > provides vastly negative value. About 50% of all CPU time during make is
> > spent on executing a ~12k line libtool shell script. On a fast enough
> > machine you don't notice easily, as the script runs, then sleeps, then the
> > compiler kicks in, then it exits - it's hard to notice. On a pi I literally
> > could watch the libtool shell script think and burn CPU... for about 50% of
> > the time.
> >
> > 2. Just running make chews CPU for multiple seconds (like 30 or so) as it
> > has to parse 55k+ lines of Makefile(s) and figure them out. Not I/O time
> > statting stuff - real CPU processing time, before it even does anything
> > useful.
> > 3. Re-running autogen.sh takes about the same time as building the rest
> > of the software.
> > 4. Whenever we do make install these days libtool is friggin' re-linking
> > almost everything. It's horrendous. A make install that should take < 5
> > seconds on a fast intel box takes a minute. On a pi it's even worse.
> >
> > Some quick back-of-a-napkin math tells me we'd cut our full build times
> > down to maybe 1/4 of what they are now by going with a raw hand-made
> > makefile.
> > For my build set (efl + e + terminology + rage) that'd go from 2hrs to
> > 30mins. It'd drastically improve our productivity when developing. When
> > hunting a bug and "trying things" we have to wait for all that relinking
> > mojo. It's horrible. change anything in eina and it causes eo and eolian to
> > rebuild which causes regeneration of eolian output which causes even more
> > rebuilds... When you're fast cycling to find a bug this is the last thing
> > you want. Technically it's correct, as they do depend on each other, but
> > this is something we can deal with on our own and test at the end with a
> > full rebuild. When a full rebuild takes 1/4 the time it's far less painful.
> >
> > I think we really need to reconsider our build system. It's a drag on
> > productivity. It's been pissing me off for a long time now, long before I
> > got a Pi. It's just an order of magnitude worse on a Pi.
> >
> > So here is our reality:
> >
> > 1. We don't need autotools' esoteric OS support. We are a complex enough
> > library set that a new OS requires us to port and support it. So we
> > basically support the following OS's:
> >
> >   * Linux
> >   * FreeBSD
> >   * OpenBSD
> >   * NetBSD
> >   * Darwin/OSX
> >   * Windows
> >   * Solaris/Open Solaris/Open Indiana
> >
> > The Unixen outside the above list basically don't matter to us.
> >
> > That's our reality. Anything else requires a specific port and work, and
> > so could happily mean someone has to do build system work. They have to
> > with autotools anyway. So we don't need support beyond the above, and any
> > new OS's need explicit support in code anyway, so we may as well add some
> > detection/customisations in the build system then too.
> >
> > 2. Very few people grok the .m4 stuff in our autotools and very few ever
> > will. Our m4 macros are there to make our configure.ac smaller and easier
> > to maintain, but combined, our m4 + configure.ac blob is 24k lines of shell
> > +m4 mix. configure.ac alone is 6k lines. I am willing to bet we can do a
> > cleaner PURE /bin/sh configure in far, far less than 6k lines total for
> > everything we need.
> >
> > 3. The more time goes on, the more we fight with autofoo and do "weird
> > stuff" like code-generate during build (eolian) and the more we have to
> > work around autotools to get stuff done.
> >
> > 4. A lot of the stuff autotools does to be "technically correct" hurts us
> > rather than helps us.
> >
> > So given that.. what do we do? Options:
> >
> >   * A shell script to replace configure + gen a Makefile.cfg & include from
> > our Makefile(s)
> >   * CMake
> >   * KConfig
> >   * Scons
> >   * Waf
> >   * Meson
> >   * Ninja
> >   * Jam
> >
> > Possibly more. I'm wary of adopting any "fancy build system" unless we truly
> > don't have to fight it and it brings a LOT of value with it.
> >
> > My personal take on shortlisting the options: the top 3 above. A hand
> > rolled simple makefile set with just enough shell to detect some things and
> > gather pkg-config stuff etc., followed by the kernel's KConfig setup and
> > CMake.
> >
> > I know that with a hand rolled system we can avoid relinking and rebuilding
> > all of efl if you touch a line in eina (by just not running makedeps... -
> > if you change something in a shared header file that changes memory layout
> > and needs a rebuild, then do a rebuild by hand - you should know to do this
> > as a developer).
> >
> > I do not know if cmake will be as nice to us. Kconfig I don't know either.
> >
> > I propose that whatever we come up with should support at minimum the
> > following build system "features":
> >
> >   * configure --prefix=XXX
> >   * configure --bindir=XXX
> >   * configure --sysconfdir=XXX
> >   * configure --libdir=XXX
> >   * configure --includedir=XXX
> >   * configure --datadir=XXX
> >   * configure --localedir=XXX
> >   * configure --mandir=XXX
> >   * configure --docdir=XXX
> >   * at least all the relevant configure features we added for efl
> >   * make (from any dir/subdir)
> >   * make install
> >   * make uninstall
> >   * make DESTDIR=xxx
> >   * make dist
> >   * make distcheck
> >   * make check
> >   * cross-compiling (--host=XXX --build=XXX)
> >   * gettext support
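(side note: the dir and DESTDIR items above are trivial in a hand rolled
makefile - a minimal sketch, assuming configure wrote prefix/bindir/etc. into
Makefile.conf and "myapp" is a made-up target:)

include Makefile.conf

install: all
	install -d "$(DESTDIR)$(bindir)"
	install -m 755 myapp "$(DESTDIR)$(bindir)/myapp"

uninstall:
	rm -f "$(DESTDIR)$(bindir)/myapp"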
> >
> > Let the bikeshedding begin! :)
> >
> > --
> > ------------- Codito, ergo sum - "I code, therefore I am" --------------
> > The Rasterman (Carsten Haitzler)    ras...@rasterman.com
> >
> >
> 
> 
> 
> -- 
> Gustavo Sverzut Barbieri
> --------------------------------------
> Mobile: +55 (16) 99354-9890
> 
> 


-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler)    ras...@rasterman.com


