Hello Kent,

Friday, August 8, 2014, 9:29:54 PM, you wrote:

But it's possible to fix many problems even now!

What would you say if something VERY simple were implemented, like reporting
every emerge that fails due to a slot conflict back home, with details for
inspection?

If maintainers had that kind of data, they could learn from the wild. I don't
know exactly what they would learn, but I know it would be a very useful
experience that might jump-start evolution: useful updates to emerge and
other tools. Almost every system designed by nature has feedback functions.
It's a safe update: it will work even if it's not optimal from the start, or
even if it's not yet clear what it will help us learn. The quality of ebuilds
would improve too.

And from that database of real-world data, new tools could evolve, such as
automated bug reporting: a whole new world of tooling.

http://db.gentoo.org/report/

              System: System name
                Arch:
     Package emerged: ....
         Environment: ....
    Dependency graph: .... -> ... -> ...
        Fail message:

* 3 reports per day are accepted from a single IP
* no duplicates
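
(As a rough illustration only: a minimal client-side sketch of such a
reporter. The URL, the field names, and report_failure() itself are all
assumptions mirroring the mock report above, not any existing Gentoo API.)

    # Hypothetical sketch: db.gentoo.org/report/ does not exist yet;
    # the payload fields mirror the mock report format above.
    import json
    import os
    import platform
    import urllib.request

    def report_failure(package, dep_graph, fail_message):
        """Send one failed-emerge report to the (hypothetical) collector."""
        payload = {
            "system": platform.node(),
            "arch": platform.machine(),
            "package": package,               # e.g. "dev-lang/python-3.3.5"
            "environment": dict(os.environ),  # or a trimmed subset
            "dep_graph": dep_graph,           # e.g. ["a/b-1", "c/d-2"]
            "fail_message": fail_message,
        }
        req = urllib.request.Request(
            "http://db.gentoo.org/report/",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        # The server, not the client, would enforce the per-IP limit
        # and dedup rules above.
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status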

http://db.gentoo.org/stats/

- SlickGrid stats

Arch | Package | Times failed | Fail message

Click on a Package -> Dependency Graph
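
(Again just a sketch under assumptions: the two rules above - 3 reports per
day per IP, no duplicates - could be enforced server-side with something as
small as this; the table and column names are invented.)

    # Hypothetical server-side gatekeeper for incoming reports.
    import hashlib
    import sqlite3
    import time

    db = sqlite3.connect("reports.db")
    db.execute("""CREATE TABLE IF NOT EXISTS reports
                  (ip TEXT, digest TEXT UNIQUE, ts REAL)""")

    def accept_report(ip, payload):
        """Accept raw report bytes unless rate-limited or a duplicate."""
        day_ago = time.time() - 86400
        (count,) = db.execute(
            "SELECT COUNT(*) FROM reports WHERE ip = ? AND ts > ?",
            (ip, day_ago)).fetchone()
        if count >= 3:                        # 3 reports per day per IP
            return False
        digest = hashlib.sha256(payload).hexdigest()
        try:                                  # UNIQUE(digest) rejects dups
            db.execute("INSERT INTO reports VALUES (?, ?, ?)",
                       (ip, digest, time.time()))
            db.commit()
            return True
        except sqlite3.IntegrityError:
            return False

A stats page like the SlickGrid one above would then just aggregate this
table by arch and package.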

If GENTOO could emerge everything from any state (an unattainable but
desirable goal), that would be a great advantage for users. That feel of a
lean, mean machine that saves time - it's tasty - new fans guaranteed.



On 9 August 2014 04:58, Igor <lanthrus...@gmail.com> wrote:
Maintainers have no feedback from their ebuilds; they all do their best, but
there are no tools to formalize their work. No compass. They have no access
to the user space where the packages are installed, and no idea how users
are using their ebuilds. It's a design failure that has haunted Gentoo from
the start: no global, intelligent bug tracking system. Making no mistakes is
not possible; the automated tracking sub-systems should be there, but... we
are where we are.
Some of that is doable, i.e. we could have installation metrics systems.
CPAN, for example, has a testers network with a matrix showing where a given
thing is failing: http://matrix.cpantesters.org/?dist=CPAN-Meta-Requirements%202.126

But it's a lot of work to support.

And beyond "it installs" and "its tests pass", its piratically infeasible to 
track software failing beyond there.

And some of the reasons we have dependency declarations are to avoid problems
that will ONLY be seen at runtime and WON'T be seen during installation or
testing. (Usually because the problem was found before there were tests for
it.)
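
(For example - module names invented - a lazy import like this passes
installation and any test run that never takes the fallback path, and only
fails for a user at runtime:)

    # "fancy_codec" is only imported when that format is requested,
    # so installation and the test suite both succeed without it.
    def encode(data, fmt="plain"):
        if fmt == "plain":
            return data.encode("utf-8")
        import fancy_codec  # runtime-only dependency: fails here, not at emerge time
        return fancy_codec.encode(data)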

For that, only manual feedback systems, such as our present bugzilla, are 
adequate. 


-- 
Kent 

KENTNL - https://metacpan.org/author/KENTNL





-- 
Best regards,
 Igor                            mailto:lanthrus...@gmail.com
