I have another idea for a measurement: what is software for?
Is software first of all for developers, or for users? For the people who write it, or the people who run it? Software is made to be used, not only read. (And continuously changing software is also a problem for those who read it: you would need to reread it several times to stay up to date, and what you had read would lose its value as it became obsolete.)

I can see how one could say that Swift is “more alive” than bash, but it is arguable to call something healthier when it changes so constantly that it is barely usable for anything beyond an unimportant scripting language, or even a mere command language, while bash’s stability allows ever-growing things to be built on it. Aliveness does not equate to health. A mix of meat and shit, full of moisture, parasites and insects, is very much alive, but it is not healthy. Health requires size, reliability, and hence stability. You are not considering that at all. Software is not made to produce releases or patches; it is made to be used.

According to your definition, Guile Scheme is more alive than Emacs Lisp, which is more alive than C, which is more alive than Common Lisp, which in turn is more alive than TeX, then the deadest of all languages. Yet TeX is terrifically old, is used in many ways, keeps growing, and is the official presentation language of GNU software (via one of its dialects, Texinfo). And many believe its stability is a good thing. Bugs are not much of a problem: there were few from the beginning, the software was simple, and it was never given unreasonable powers such as network access or the ability to *modify* files. All the more so as it is only a scripting language (purely interpreted), never a compiled or VM one (whether natively or to bytecode), so most of the time we can read the source. And simple copyleft allows it to be easily free.
And as for obfuscation: most programs are written by mathematicians who have no idea how to make them readable anyway x) and programs in TeX are notoriously hard to read (so it is more a social deficiency becoming technical).

So instead, why not measure *actual usage*? Yes, this is hard, but it can be done in some reasonable way: by measuring packaging, and by measuring dependencies. popcon can be used as well. We could count how many of a project’s packages are in Debian, and weight that by how many other packages depend on them. We could also nuance that by the proportion of bugs. Then even old and untouched software (inetutils, coreutils, bash, etc.) would show pretty good health. Not to mention TeX and its thousands of packages (which would need to be counted separately); Emacs Lisp would humbly score below Common Lisp, yet still be seen as pretty active, and the same goes for Guile, etc. Popcon would say even more.

The ideal metric would be going out and asking random people whether they used some given software today, or any dependency of it (so, in effect, simply asking what software they used). Debian doesn’t collect such a metric; it would be pretty invasive as is (maybe with some anonymization, decentralized aggregation, etc.?). This method also captures the greater importance of core infrastructure projects such as glibc, gcc, etc., on which almost all free operating systems depend nowadays (even competitors such as LLVM were bootstrapped using them, so we could count bootstrapping as well). The fact that at some point LLVM hackers considered merging with GCC (though rms wasn’t aware of it at the time :/) can also be read as a sign of health for the period when it happened.
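The packaging-and-dependencies idea above could be sketched roughly like this. To be clear, the formula, the function name, and every figure below are my own invention for illustration, not an existing Debian metric:

```python
# Hypothetical sketch of the proposed health metric: weight a project's
# Debian package count by how many other packages depend on it, then
# discount by the proportion of open bugs. All numbers are made up.

def health_score(packages: int, reverse_deps: int, bug_ratio: float) -> float:
    """Packages weighted by reverse dependencies, discounted by bugs.

    bug_ratio is open bugs divided by total reports, in [0, 1].
    """
    return packages * (1 + reverse_deps) * (1.0 - bug_ratio)

# Invented example figures, for illustration only:
scores = {
    "bash":  health_score(packages=1, reverse_deps=1200, bug_ratio=0.05),
    "swift": health_score(packages=1, reverse_deps=3,    bug_ratio=0.20),
}

# A stable, widely depended-on project scores far higher than a
# fast-changing one with few dependents.
assert scores["bash"] > scores["swift"]
```

The exact weighting (multiplicative here) is a guess; the point is only that stability plus many reverse dependencies can dominate raw release activity.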
Forks are a typical example of increased development: I guess they happen both after periods when development increases (so that disagreements, or at least differences in development pace, become more likely) and before moments of enthusiasm leading to even more contributions. Yet I don’t see them as healthy in any way, because this increase in effort is actually an increase in duplicated effort. Merges are pretty much the opposite (except for some enthusiasm as well: the enthusiasm of change).