Excerpts from Peter Simons's message of Sun Aug 07 15:23:47 +0200 2011:
> That would be part (2) then. What do you mean by "brute force solving of
> dependencies"? How exactly does hack-nix determine which versions can
> co-exist?
>
> Also, it is not obvious to me why this whole procedure requires
> end-users to run a special tool.

You've hit the key point: if you upgrade (or force a specific package
version), all dependencies can change.
Example ("->" means "depends on"):

  TARGET -> P1, P2
  P1 -> B 2.0 or 1.0 -> C
  P2 -> B 1.0 -> C

If you install P1 on its own, B 2.0 is tried first and everything is fine.
Same for P2.

Let's have a look at what the solver does when trying to build TARGET: it
finds TARGET in the pool, then resolves P1 first, then P2. While resolving
P1 it also visits its dependencies, picking B-2.0 and C. When it gets to P2
it stops, because B-1.0 must not be used next to the already chosen B-2.0:
linking two versions of the same library into one closure could cause
issues. So it backtracks and tries the second candidate for B, version 1.0.
This time everything passes.

Things get even more complicated because whether version 2.0 or 1.0 works
may also depend on the GHC version, so it is impossible to get by with one
cabal-packages.nix only. You may need cabal-packages-for-ghc-6.10.nix,
cabal-packages-for-ghc-6.11.nix, ... and the nightmare begins. It gets even
worse if you want to support packages against both newer and older HaXml.
That's why I don't try to generate one fixed set of .nix files.

Patching Cabal to dump .nix expressions was my first attempt, but I gave up
on it soon. Instead all of this is done on the fly, as needed. Thus, in the
example above, you get two variants of P1 depending on whether B 2.0 may be
used or not, without doing any manual work beyond including B 1.0 and B 2.0
in the pool.

That's why you should run hack-nix once for each TARGET: the set of
dependencies may differ anyway. I agree that in most cases you can get
along with one set of packages (cabal-packages.nix demonstrates it). But as
soon as you want to escape that set you have to start from scratch, and
that's when hack-nix starts paying back every minute you've spent on it.

> Why don't you generate plain Nix expressions that can be checked into
> nixpkgs?

The short answer: because in the worst case you would have to create a
cabal-packages.nix for each target package and maintain them all, because
one needs a newer bytestring while another doesn't compile with that newer
version.

The interesting point to note is that nixpkgs is built around the
assumption that there is one set of packages with which you can build
almost everything. Eelco Dolstra once told me on IRC that there is no
dependency solver by design, and I have to agree that this works pretty
well in almost all cases. More importantly, you get deterministic results:
if hack-nix were able to cope with the huge solution space, you'd get the
same darcs hash only by accident.

If you need workarounds you can get them by overriding dependencies deeply,
which has been done for webkit in the past. Today "applyGlobalOverrides"
seems to be the tool to use.
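To make "brute force solving" a bit more concrete, here is a minimal,
self-contained Haskell sketch of the idea using the toy TARGET/P1/P2 pool
from above. It is purely illustrative: the package names, the Plan type and
the consistency check are my own assumptions, not hack-nix's actual data
model or code.

  -- Brute-force sketch (illustrative only, not hack-nix's real code):
  -- enumerate every combination of candidate versions, then keep the
  -- consistent ones.
  import Control.Monad (forM)

  type Pkg     = String
  type Version = String
  type Plan    = [(Pkg, Version)]

  -- Candidate versions for every package in the pool, preferred first.
  pool :: [(Pkg, [Version])]
  pool = [ ("TARGET", ["1.0"])
         , ("P1",     ["1.0"])
         , ("P2",     ["1.0"])
         , ("B",      ["2.0", "1.0"])   -- P1 accepts both, P2 only 1.0
         , ("C",      ["1.0"])
         ]

  -- The only non-trivial constraint of this toy pool: P2 links only
  -- against B-1.0. One-version-per-package holds by construction.
  consistent :: Plan -> Bool
  consistent plan = lookup "B" plan == Just "1.0"

  -- All consistent plans; the first one is the preferred solution.
  plans :: [Plan]
  plans = filter consistent (forM pool (\(p, vs) -> [ (p, v) | v <- vs ]))

  main :: IO ()
  main = print (head plans)
  -- The combination with B-2.0 is generated first, fails the check, and
  -- the enumeration falls back to B-1.0 for everybody, just like in the
  -- walkthrough above.

The downside is also visible here: the number of combinations grows
exponentially with the number of packages that have several candidate
versions, which is exactly why the pool has to stay small.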
> they would be able to build and install Haskell packages using the
> standard nix-env interface, which seems to be very desirable.

You can! Have a look at the [default.nix] file in the overlay and look for
xmonad, darcs, hackNix itself, ... The function exeByName which is used
there calculates different dependencies for each of them (which may look
like a waste of resources, but it was the only way I came up with to
support everything possible).

> to use a separate tool to install Haskell packages. Besides, such a tool
> already exists, namely cabal-install.

Yes, but it does not know about Nix. And if dependencies are wrong on
Hackage you're doomed; that has happened too often in the past to ignore.
Also, when I wrote hack-nix, gtk2hs was not cabalized, so hack-nix was able
to work around that while cabal-install was not.

> Am I missing something?

No, you've done a great job identifying the missing pieces that help you
understand the whole thing :)

> So when a package cannot be built with the very latest version of its
> dependencies, I'll have to choose the version that ought to be used
> manually. Is that right?

Correct. You add it to ~/.hack-nix/config and run hack-nix to regenerate
the package pool; then you should be fine (hopefully). In the example
above, depending on both B-1.0 and B-2.0 is considered fine if B is used in
executable sections only (and thus does not propagate to TARGET when
linking).

I also agree that we should think about adding the Haskell Platform
versions (instead of the latest?) to the pool, because packages are more
likely to build with them. However, if you add too many versions to the
pool, brute force will kill Nix. So the perfect replacement would be to do
it the way Eclipse Equinox does: use logical solvers which identify bad
paths early and visit them only once, improving speed (a toy sketch of that
pruning idea is at the end of this mail).

I also had long discussions with Andres Loeh, who thinks the design is not
perfect. AFAIK he didn't come up with something better that could be
implemented in reasonable time.

Side note: for the Python and Ruby overlays I didn't implement brute-force
solving. Instead, if version mismatches are found, an error message is
printed and you can force a version manually, which is a compromise. For
Python there are additional issues, such as having to evaluate the .py
description files to know about dependencies, so a lot of dependencies have
to be added manually. Ruby is stable, Python is still experimental.

In the end you also have to think about whether you want to download 50,000
package descriptions (the Ruby case) if you only want to build 20 of them.
I don't have a solution to this problem, which applies to Ruby, Python,
Haskell, CPAN (Perl), ...
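Here is the pruning sketch referred to above, reusing the toy pool from the
earlier brute-force example. Again this is only an illustration under my
own simplified assumptions; a real Equinox-style approach hands the problem
to a logical (SAT/pseudo-boolean) solver and learns conflicts instead of
merely checking constraints eagerly.

  -- Depth-first search that rejects a version as soon as it conflicts
  -- with choices already made, so the subtree below it is never expanded.
  -- Illustrative sketch only; this is neither hack-nix's nor Equinox's code.
  type Pkg     = String
  type Version = String
  type Plan    = [(Pkg, Version)]

  pool :: [(Pkg, [Version])]
  pool = [ ("TARGET", ["1.0"]), ("P1", ["1.0"]), ("P2", ["1.0"])
         , ("B", ["2.0", "1.0"]), ("C", ["1.0"]) ]

  -- Constraint usable on partial plans: once P2 and B are both chosen,
  -- B has to be 1.0 (P2 does not link against B-2.0).
  okSoFar :: Plan -> Bool
  okSoFar plan = case (lookup "P2" plan, lookup "B" plan) of
    (Just _, Just bv) -> bv == "1.0"
    _                 -> True

  solve :: [(Pkg, [Version])] -> Plan -> [Plan]
  solve []               plan = [plan]
  solve ((p, vs) : rest) plan =
    [ done
    | v <- vs
    , let plan' = (p, v) : plan
    , okSoFar plan'                -- prune bad paths early ...
    , done <- solve rest plan'     -- ... before exploring the rest
    ]

  main :: IO ()
  main = print (head (solve pool []))

With a five-package toy pool the saving is negligible, but with hundreds of
packages and several candidate versions each, checking constraints per
choice instead of per complete assignment is the difference between
finishing and "killing Nix".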
Marc Weber

_______________________________________________
nix-dev mailing list
[email protected]
https://mail.cs.uu.nl/mailman/listinfo/nix-dev