What bothers me is that my manual evaluation is quite subjective: I only skim
the code, the documentation and the issues. I'm unable to test whether the
package compiles with the latest version of Nim, and I can't evaluate whether
it's easy to use, with a high-level Nim API, or only a low-level binding.
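The "does it compile with the latest Nim" part could at least be automated by shelling out to the build tool and recording the result. A minimal sketch, assuming `nimble build` (or any build command) is the entry point and treating a timeout or missing binary as a failure:

```python
import subprocess

def compiles_ok(cmd, cwd=None, timeout=300):
    """Run a build command and report whether it succeeded.

    `cmd` is e.g. ["nimble", "build"] run in the package's checkout
    directory -- an assumed entry point, not an official check.
    A timeout or a missing executable counts as a failed build.
    """
    try:
        result = subprocess.run(cmd, cwd=cwd, capture_output=True,
                                timeout=timeout)
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False
```

Looping this over a list of cloned packages would give a compiles/doesn't-compile column for free, which the manual skim can't.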
Nimble could run better checks when releasing a package:
[https://github.com/nim-lang/nimble/issues/632](https://github.com/nim-lang/nimble/issues/632)
The Nimble package directory can help with this; see
[https://nimble.directory/about.html](https://nimble.directory/about.html).
If someone wants to contribute, you can contact me on IRC.
Wow, good job. Sometimes you just need to brute-force things that can't be
automated.
Wow, @spip, great work! As you mention, it will be complex to keep this up to
date, but having this information as of today is already a great start!
> I plan 2-3 weeks to complete the evaluation of the remaining packages. I'll
> get better stats at the end of the job. Do you think it's worth it? Do you
> have better ideas on how to measure packages quality/maturity?
Yes, it is definitely worth it, and the idea of somehow formalizing these
checks is worth pursuing.
I already built this as an automated check; I've spent some years trying to
get it merged:
[https://github.com/nim-lang/packages/pull/916#issue-227422899](https://github.com/nim-lang/packages/pull/916#issue-227422899)
But manual one-by-one package checking is also nice to have, though I don't
know if it can scale.
In order to add an advanced search feature to Nimble, I've been curating the
repository of packages for the last two weeks. I've rated the packages on a
scale from 1 (dead code) to 4 (high-quality package)
([https://docs.google.com/spreadsheets/d/1HWy2YumMMcgEDHk34ACauuWR5TYDJTRUVQ6B-LuRXCs/edit?usp
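One way such a 1-to-4 scale could be made reproducible is to derive it from a few objective signals. A sketch under assumed criteria (the signals and thresholds below are illustrative guesses, not the ones actually used in the spreadsheet):

```python
def maturity_score(compiles, has_docs, has_tests, months_since_commit):
    """Map heuristic signals to the 1 (dead code) .. 4 (high quality) scale.

    Illustrative thresholds only -- the real spreadsheet ratings came from
    manual review, not from this formula.
    """
    if not compiles or months_since_commit > 36:
        return 1        # dead code: won't build, or abandoned for years
    score = 2           # builds: at least usable
    if has_docs or has_tests:
        score = 3       # some signs of care
    if has_docs and has_tests and months_since_commit <= 12:
        score = 4       # documented, tested, and recently maintained
    return score
```

Encoding the scale this way would let the rating be recomputed automatically as packages evolve, which addresses the keep-it-up-to-date concern raised earlier in the thread.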