[Nix-commits] [NixOS/nixpkgs] 6b4a41: libfaketime: make the build reproducible

2016-12-20 Thread Alexander Kjeldaas
  Branch: refs/heads/master
  Home:   https://github.com/NixOS/nixpkgs
  Commit: 6b4a41a360efe14e6cb632d024467b19e991813c
  
https://github.com/NixOS/nixpkgs/commit/6b4a41a360efe14e6cb632d024467b19e991813c
  Author: Alexander Kjeldaas <a...@formalprivacy.com>
  Date:   2016-12-20 (Tue, 20 Dec 2016)

  Changed paths:
M pkgs/development/libraries/libfaketime/default.nix
A pkgs/development/libraries/libfaketime/no-date-in-gzip-man-page.patch

  Log Message:
  ---
  libfaketime: make the build reproducible

A rebased version of 
https://github.com/NixOS/nixpkgs/pull/2281/commits/cb8bd05a0172db9c02ba0067b21bbc0ce17cc522
Note: we no longer apply the spurious lrt patch.

This allows `nix-build --check -A libfaketime` to succeed.


___
nix-commits mailing list
nix-comm...@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-commits


[Nix-commits] [NixOS/nixpkgs] 4c99d2: kernel: set nx bit on module ro segments

2016-06-03 Thread Alexander Kjeldaas
  Branch: refs/heads/master
  Home:   https://github.com/NixOS/nixpkgs
  Commit: 4c99d22f19d329fe102d89c838134d75f1bf35a2
  
https://github.com/NixOS/nixpkgs/commit/4c99d22f19d329fe102d89c838134d75f1bf35a2
  Author: Alexander Kjeldaas <a...@formalprivacy.com>
  Date:   2016-06-03 (Fri, 03 Jun 2016)

  Changed paths:
M pkgs/os-specific/linux/kernel/common-config.nix

  Log Message:
  ---
  kernel: set nx bit on module ro segments

Fixes #4757.


___
nix-commits mailing list
nix-comm...@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-commits


Re: [Nix-dev] Sidestepping the community builds trust issue?

2015-12-26 Thread Alexander Kjeldaas
On Sat, Dec 26, 2015 at 10:25 AM, Michael Raskin <7c6f4...@mail.ru> wrote:

> >If web-of-trust is the best solution, and the only blocker is build
> >reproducibility, how about trying to classify build differences?
> >
> >Each of the differences will have a reason, and either we can fix the
> >build to be deterministic (e.g. timestamps, build paths), or we can
> >classify a class of changes as equivalent (e.g. optimizations resulting
> >in equivalent code, prelinking).
>
> Do we want to do something about Profile Guided Optimisation, for
> example? I think GCC builds itself with PGO after bootstrapping, and
> I don't know what other packages use some amount of unreproducible PGO.
>
>
PGO is in theory reproducible; it just has another input, namely the
profile data.  The question is whether it is possible to attack an
otherwise trusted build using fake profile input.

If the profile input is not a usable attack vector, then all that is needed
is consensus on which input to use for a PGO compilation.  This is easier
than the trust issue.
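
To make that concrete, here is a rough Python sketch (my own illustration,
not existing Nix or Hydra code): if the profile blob is content-addressed
and folded into the build's input fingerprint, a PGO build stays a pure
function of its declared inputs, and builders that agree on the fingerprint
are comparing like with like.

import hashlib
import json


def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def pgo_input_fingerprint(source_hash, profile_path, flags):
    # The fingerprint covers the sources, the agreed-upon profile data and the
    # compiler flags; two builders sharing it should produce comparable outputs.
    payload = json.dumps(
        {"src": source_hash, "profile": sha256_file(profile_path), "flags": flags},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()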

Alexander
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Funding Hydra Development

2015-01-22 Thread Alexander Kjeldaas
On Thu, Jan 22, 2015 at 1:52 PM, Vladimír Čunát vcu...@gmail.com wrote:

 This thing is about trust, and personally I'd prefer signing the
 derivation-output hash pairs and having some web-of-trust-like solution.
 (Although some build redundancy is certainly good, for multiple reasons.)


Absolutely.


 The problem with seti@home -like solutions is that verifying correctness
 is generally no cheaper than full rebuild.


Yes, you need some rebuilds.  In a large network that should not be a
problem IMO.  Any distributed system with redundancy needs to do redundant
work.


 Therefore, the untrusted computers bring very little added value. (They
 can distribute the content signed by trusted people, but distribution isn't
 much of a problem in our case, IMHO.)


I don't understand how this follows from the previous point.  Yes, the
untrusted computer needs to be associated with a crypto key so that there is
some consequence to it lying; that is an improvement.  However, a completely
untrusted computer can still be used to generate contested outputs (i.e. to
look for signed outputs that are lying).  A contested output is valuable:
many people can try building it, so we can figure out how to react to the
person who signed it (was it a flaky build, non-determinism, or an attack?).

Thus a normal NixOS machine (unknown, untrusted) can still recompile some
random package that is being installed, in order to strengthen trust in the
official builds.


 On 01/22/2015 01:29 PM, Wout Mertens wrote:

 Then you could do something like, have 1000 builders, and if 501
 builders get the same output hash for a derivation, it gets accepted on
 the public ledger of input/output hashes.


 I'm not sure about such schemes either. It isn't very economical to build
 everything 1000-times. I do see the bitcoin-like inspiration (I guess), but
 I wouldn't apply it here, at least not in this way.
 (Do we want to give most decision power to those who make most claims on
 the build results? Even if we extend them with some additional
 proof-of-work?)


Comparing this to bitcoin or consensus protocols is not correct.  In bitcoin
and consensus protocols we try to agree on *an order of events*, and
therefore we need a majority to agree.  The problem we have here is agreement
on a mapping from a set of inputs to a set of outputs.  This can be checked
fully in parallel, and a single proof of cheating is enough to show that
someone is dishonest and to do whatever is needed to kick them out.
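
A hypothetical sketch of the bookkeeping, assuming signed (derivation hash,
output hash, signer) reports as the input data: checking is per-derivation,
so it parallelises trivially, and a single pair of conflicting reports is
enough to flag a derivation for investigation.

from collections import defaultdict
from typing import NamedTuple


class Report(NamedTuple):
    drv_hash: str   # hash of the derivation (the inputs)
    out_hash: str   # output hash the signer claims to have built
    signer: str     # identity of the key that signed the claim


def contested_outputs(reports):
    # Group claimed output hashes per derivation; any derivation with more
    # than one distinct claim is contested and worth rebuilding elsewhere.
    claims = defaultdict(set)
    for r in reports:
        claims[r.drv_hash].add(r.out_hash)
    return {drv: outs for drv, outs in claims.items() if len(outs) > 1}


if __name__ == "__main__":
    demo = [
        Report("drv-aaa", "out-111", "hydra"),
        Report("drv-aaa", "out-222", "random-laptop"),  # disagreement => contested
        Report("drv-bbb", "out-333", "hydra"),
    ]
    print(contested_outputs(demo))  # {'drv-aaa': {'out-111', 'out-222'}}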

Alexander
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Funding Hydra Development

2015-01-21 Thread Alexander Kjeldaas
On Wed, Jan 21, 2015 at 10:34 PM, Vladimír Čunát vcu...@gmail.com wrote:

 On 01/21/2015 10:32 PM, Wout Mertens wrote:

 Not sure if throwing money at the Hydra codebase will speed up compiles
 (apart from setting it up to use ccache).


 I understood that rather as having more build power at Hydra.nixos.org


It would be useful to have a ballpark figure of what is needed.

A good start would be to update https://nixos.org/wiki/Hydra with the specs
of the current machines, and to list them all.  I see builds on
rackspace-[1-4], for example, which are not listed.

Alexander
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Request for comments: pinky-promise determinism

2015-01-03 Thread Alexander Kjeldaas
I don't like using the git hash, as it is SHA-1 and not fit for integrity
purposes.  I think that would be a regression compared to the complex tricks
that Georges referred to.

Would it be possible to create a git wrapper that does the
make_deterministic_repo step in a transparent manner for cargo and have
this git be in the build environment?
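
Something along these lines, as a hedged sketch only: a wrapper script named
git, placed ahead of the real git on PATH in the build environment, that
delegates to the real binary and runs a cleanup pass after clone/fetch.  The
cleanup commands shown are merely illustrative stand-ins, the actual
make_deterministic_repo logic lives in nix-prefetch-git, and the path to the
real git is an assumption.

#!/usr/bin/env python3
import os
import subprocess
import sys

REAL_GIT = "/run/current-system/sw/bin/git"  # assumed location of the real git


def run(*args, cwd=None):
    return subprocess.call([REAL_GIT, *args], cwd=cwd)


def make_repo_more_deterministic(repo):
    # Illustrative cleanup only; not the real make_deterministic_repo logic.
    run("reflog", "expire", "--expire=now", "--all", cwd=repo)
    run("repack", "-A", "-d", cwd=repo)
    run("prune", cwd=repo)


def main():
    argv = sys.argv[1:]
    status = run(*argv)
    if status == 0 and argv and argv[0] in ("clone", "fetch"):
        # For `clone` the last argument is usually the target directory;
        # for `fetch` we assume the current directory is the repository.
        repo = argv[-1] if argv[0] == "clone" and os.path.isdir(argv[-1]) else "."
        make_repo_more_deterministic(repo)
    return status


if __name__ == "__main__":
    sys.exit(main())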

Alexander

On Fri, Jan 2, 2015 at 9:07 PM, Wout Mertens wout.mert...@gmail.com wrote:

 Another use-case: providing the same input hash, based only on version,
 for gcc and cross-gcc on another platform. Ditto for ccache and distcc.

 On Fri, Jan 2, 2015, 14:56 Shea Levy s...@shealevy.com wrote:

 For dirty dirty hacks, you can set __noChroot = true and get access to
 the network.

 On Jan 2, 2015, at 1:09 PM, Georges Dubus georges.du...@gmail.com
 wrote:

 Hello everyone

 I would like to propose a compromise in the purity rules of
 non-fixed-output derivations, and hear what you think about it.

 # Rationale

 There are a few situations where derivations play the role of
 fixed-output derivations, but the hash of their output is not fixed. Some
 examples:
 - fetchgit derivations when the .git directory must be kept. The .git
 directory is incredibly hard to make deterministic, as this requires
 tweaking implementation details: purging any commit that might have been
 downloaded from the server but has no link to the reference we are using.
 - cargo, the package manager for the Rust language, uses git to download
 its database and to check that it is up to date. The same problem as with
 fetchgit arises, with the added trouble that we are now tweaking an
 implementation detail of an implementation detail.

 However, we can trust that, even though the .git directory is not
 binary-identical in each situation, the result of the git commands we
 would use in the packaging task is always the same.

 # Proposition

 I propose a new kind of derivation that would be identical to the current
 non-fixed-output derivation, but without any restriction on its access to
 the outside world.

 The documentation should state that this kind of derivation is dangerous
 and should only be used with a trustworthy tool (since the tool is trusted
 to be deterministic in its behaviour).

 This new derivation could be used for dirty hacks, but this should be
 discouraged by the documentation, and never accepted inside nixpkgs.

 # Conclusion

 The inclusion of this new kind of derivation would allow a satisfying
 implementation of leaveDotGit for fetchgit, one that does not rely on
 complex tricks[1], and allow me to implement cargo support without relying
 on non-future-proof internals tweaking.

 However, this would be at the cost of including a new kind of derivation
 that is much less satisfying, and that could, if misused, come back to bite
 us.


 I'd love to hear what you think about it.


 [1]
 https://github.com/NixOS/nixpkgs/blob/master/pkgs/build-support/fetchgit/nix-prefetch-git#L198


 --
 Georges Dubus
  ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] less: When assumptions ruin the world

2015-01-03 Thread Alexander Kjeldaas
Just a note for those who are annoyed that less will clear the screen on
quit:

export LESS=-X

On Fri, Jan 2, 2015 at 4:30 PM, Ertugrul Söylemez ert...@gmx.de wrote:

 Hi Eelco,

  There is a very good reason for this principle.  If a program does
  more than what it's intended to do, then it hurts composability.
 
  There shouldn't be an issue with composability here, because Nix will
  only run the pager when stdout is a terminal. So things work fine if
  you pipe Nix into another command.

 Let me give you an example where this assumption fails:  Listing the
 current generations from your shell profile.  In fact something very
 similar happened to me, which motivated me to start this thread:  The
 change broke my assumption that Nix can be used safely from a shell
 script.

 Nothing bad happened, but one day was wasted, because a script stopped
 for `less` without my knowledge.

 I really believe that projects should start as non-interactive script-
 and command-line-friendly programs /by default/.  I'd go as far as to
 call this a good design principle.  Frontends can always be made.


 Greets,
 Ertugrul
 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev

___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] How to get rid of systemd (was: Modifying the init system (introducing S6 supervision suite))

2014-12-27 Thread Alexander Kjeldaas
I'll just jump in and say that if you are going to donate time to do this,
Ertugrul, then I'm all for it.

It seems like it is possible to make the service system a lot leaner,
smaller, and nicer.  This is especially true when services run in
containers, or when NixOS is used in Docker containers.

For those who don't quite follow what Ertugrul describes, looking at what are
called orchestration systems for Docker gives a sense of what I think he
is talking about.  An orchestration system is a way of defining how multiple
Docker containers are connected to each other.

The orchestration system itself doesn't necessarily need to keep the
state of the system, because the cloud system it operates on can just spin
up new instances, or the cloud system has a service discovery mechanism, or
which service is active is handled by a load balancer.

Similarly, on a host, when we have cgroups/containers/namespaces, we can
spin up a new service in a container and redirect traffic to it by fiddling
with namespaces, routing settings etc.  We can make

I am mostly interested in seeing how the monoid

On Sat, Dec 27, 2014 at 6:53 PM, Ertugrul Söylemez ert...@gmx.de wrote:

 Hi there Tobias,

  One thing most of us seem to agree about [...]
 
  Maybe that's true (I don't see a consensus on the list, only the usual
  sparse FUD. I don't follow IRC) but arguments that start this way
  always give me the willies.

 I understand why, but I'm not very good at English rhetoric.  Please
 don't let that overshadow the rest of the mail.


  [...] is that conceptually NixOS does not depend as much on something
  like systemd as most other distributions do.
 
  Something like systemd? Nix doesn't magically make *nix less *nix,
  for better or worse. The meat of systemd is still service management,
  which NixOS needs as much as any distribution.

 NixOS needs service *management*, but it doesn't necessarily need a
 service *manager*.  In traditional distributions services are managed by
 destructive update (enable/disable).  Normally we don't have that.


  Switching to it was a huge step back from the ideals of NixOS,
  because it represents all the traditional views on what a system
  should look like: a giant pile of interconnected mutable variables.
 
  That's what a running system is, though. Immutability *is* just for
  file systems. You plant a pretty tree, and hope it describes a sane
  system that will do the right thing most of the time. Then, something
  will happen that will kick all your pretty algebraic assumptions off
  the table very quickly. Most of your OS is in buggy C, whether you
  like it or not...

 The assumptions are not that the programs will do what I expect them to
 do, but rather that two instances of nginx, each with one virtual host,
 are equivalent to one instance of nginx with two virtual hosts.  This does
 not assume anything about the correctness of nginx itself, but it does
 help with building systems, containers and networks componentwise.


  You seem to have some very nice ideas about describing services (or
  something better), if still very rough around the edges. But if you
  can't implement your ideas on top of systemd, so much for
  init-agnosticism. And probably conceptual purity, too.

 This may be a misunderstanding.  Init-agnosticism means that the
 complexity of translating this concept to an actual set of services that
 systemd can work with is the responsibility of a single
 function/script/whatever.  We achieve separation of concerns.

 Should we, at any point, decide to turn our back on systemd or even just
 provide the option to use another manager, then all we need to do is to
 reimplement this one function/script/whatever.


We can use a monoid system to construct configurations, but the socket
activation standard, for example, is centered on optimizing the activation
script itself.  What are your thoughts on the activation script?

I can easily see that using systemd might be overkill and way too complex
for a container-based system, so I think there is something to research
here.

I also think upgrading services doesn't really work well in systemd or in
the current setup.  Similar to how a distributed Docker setup uses a
load balancer to switch instances atomically, we should not need to take
down the old service instance before the new one is created.

Rather, the upgraded service should be started in isolation (using
containers), and only after ensuring that it has started, is working, etc.
should the switch happen, using namespaces, routing entries and so on.  This
should be the preferred way to deal with non-transactional services (i.e.
non-database stuff).

The idea that the old service must be stopped before the new one is started
is based on what I think is a conflation of concerns, namely treating the
whole service state as global state.  Instead a lot of services can be
treated as a sequence of isolated containers, and a small set of
load-balanced, mutable service entry 

Re: [Nix-dev] Enable openntpd instead ntp by default

2014-12-26 Thread Alexander Kjeldaas
One data point: systemd-timesyncd is approx. 1.5k lines, openntpd is approx.
5k lines (according to Theo), and ntpd is approx. 100k lines.

Alexander

On Fri, Dec 26, 2014 at 4:26 AM, Roger Qiu roger@polycademy.com wrote:

  What are the advantages and disadvantages?


 On 24/12/2014 9:12 PM, Raahul Kumar wrote:

  I would strongly prefer systemd-timesync, as the default, since we're
 using systemd anyway. Might as well get the maximum use out of it.

  Aloha,
  RK.

 On Tue, Dec 23, 2014 at 10:38 PM, Eelco Dolstra 
 eelco.dols...@logicblox.com wrote:

 Hi,

 On 21/12/14 21:32, Paul Colomiets wrote:

  I'm not sure was it discussed before, but I want to ask if we should
  enable openntpd instead of ntpd by default?

 +1 on switching to openntpd or systemd-timesyncd (with a preference for
 the
 latter for better integration with the rest of the system, such as
 automatically
 handling network reconfiguration events from networkd).

 --
 Eelco Dolstra | LogicBlox, Inc. | http://nixos.org/~eelco/
  ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev




 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


  --
 Founder of Matrix AI | http://matrix.ai/ | +61420925975


 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Avoiding threads in the daemon

2014-12-19 Thread Alexander Kjeldaas
On Fri, Dec 19, 2014 at 7:20 PM, Eelco Dolstra eelco.dols...@logicblox.com
wrote:

 Hi,

 On 18/12/14 17:32, Ludovic Courtès wrote:

  Thus, I think Nix commit 49fe95 (which introduces monitor-fd.hh, which
  uses std::thread just for convenience) should be reverted, along with
  the subsequent commits to that file; then commit 524f89 can be reverted.

 I really don't want to get rid of threads because they're useful and I
 want to
 use them more in the future (e.g. build.cc would be much simpler if it used
 threads rather than the current event-driven approach; nix-daemon could
 handle
 client connections with a thread rather than a process; etc.).

 I see a few ways to get PID namespaces back:

 * Do a regular fork followed by clone(... | CLONE_NEWPID | CLONE_PARENT)
 (after
 which the intermediate process can exit).

 * Call setuid/setgid via syscall() to bypass the locking in the Glibc
 wrappers.
 However, there might be other problematic functions so this is not a great
 solution.

 * Get the Glibc folks to provide a way to run at-fork handlers with
 clone().

 Clearly the first option is the easiest.


There is a fourth solution: use a fork()-service.  When you start your
process, before any mutex is held and while you still control the full state
of your program, you fork() a dedicated child, the fork()-service.  You then
communicate with this service using pipes, or your favorite IPC mechanism.
The fork()-service never starts any threads, and can therefore safely fork
off any child you need.
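
A minimal sketch of the pattern in Python (incidentally, Python's
multiprocessing module ships a "forkserver" start method built on the same
idea); the pipe protocol here is made up for illustration:

import os
import signal


def fork_service(request_r):
    # Runs in the helper: it is single-threaded, so fork() here is always safe.
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # auto-reap finished children
    with os.fdopen(request_r) as requests:
        for line in requests:
            cmd = line.split()
            if cmd and os.fork() == 0:   # grandchild becomes the requested program
                os.execvp(cmd[0], cmd)
    os._exit(0)


def start_fork_service():
    r, w = os.pipe()
    if os.fork() == 0:                   # the fork()-service itself
        os.close(w)
        fork_service(r)
    os.close(r)
    return w                             # the parent writes spawn requests here


if __name__ == "__main__":
    spawn_fd = start_fork_service()      # must run before any threads are created
    # ... later, possibly from a heavily threaded parent:
    os.write(spawn_fd, b"echo hello from the fork service\n")
    os.close(spawn_fd)                   # the helper exits when the pipe closes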

Alexander
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Proposal: Standard installation procedure

2014-10-21 Thread Alexander Kjeldaas
On Mon, Oct 20, 2014 at 9:50 AM, Eelco Dolstra eelco.dols...@logicblox.com
wrote:

 I don't think nixos-install is that complex. Almost all of the
 initialisation it
 does in the target file system is to make nix-build work in the chroot. The
 NixOS initialisation is done by the activation script.

 Check out nixos-container create - it does almost no initialisation of
 the
 container file system, since everything is done by the activation script
 during
 the first boot of the container.



What I find missing is *creation* of the actual partitions, file systems,
etc., integrated into system startup.
This can be implemented as an idempotent initialization step that is turned
off by default (requiring a boot flag).
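
A rough sketch of what I mean, with a hypothetical device name and boot flag
(this is not an existing NixOS module): only create a filesystem when the
device has none, and only when the boot flag asks for it, so the step is safe
to run on every boot.

import subprocess


def has_filesystem(device):
    # blkid exits non-zero when it finds no recognisable filesystem signature.
    return subprocess.run(["blkid", device], capture_output=True).returncode == 0


def boot_flag_present(flag="nixos.autoformat"):   # hypothetical flag name
    with open("/proc/cmdline") as f:
        return flag in f.read().split()


def ensure_filesystem(device, fstype="ext4"):
    if not boot_flag_present():
        return  # off by default
    if has_filesystem(device):
        return  # idempotent: already initialised
    subprocess.run(["mkfs", "-t", fstype, device], check=True)


if __name__ == "__main__":
    ensure_filesystem("/dev/vdb1")  # hypothetical data partition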

Alexander
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Proposal: Standard installation procedure

2014-10-21 Thread Alexander Kjeldaas
On Tue, Oct 21, 2014 at 10:27 AM, Domen Kožar do...@dev.si wrote:



 What I find missing is *creation* of the actual partitions, file systems
 etc. integrated into the system startup.
 This can be accomplished as an idempotent initialization and also be
 turned off by default (require boot flag).


 Did you see
 https://github.com/aszlig/nixpart/wiki/Device-tree-representation


No, I didn't.  Thanks for the reference, that's great!  Any reason why the
NixOS module in the documentation is not part of the master branch?
Alexander
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Our policy for upgrading haskellPackages

2014-10-15 Thread Alexander Kjeldaas
Regarding the network/network-uri split, tibbe has promised to backport
things to 2.5.

https://github.com/haskell/network/commit/fba98d81bf733bb769316b86b6675011165e59f0#commitcomment-7996263

Alexander

On Wed, Oct 15, 2014 at 9:27 PM, John Wiegley jo...@newartisans.com wrote:

 This message is a follow-on to a discuss Peter and I were having on GitHub,
 since I believe it is of more general interest:

  Peter Simons writes:

  Generally speaking, the two goals
 
1. have recent versions of all major Haskell packages and
2. all Haskell packages should compile
 
  are contradictory. The 2.6.x version of network has been out there since
 Tue
  Sep 2 18:14:36 UTC 2014, i.e. for more than 1,5 months. Since 2.5.x and
  2.6.x have incompatible APIs, many package authors don't bother
 supporting
  the old version: they update their packages to compile against 2.6.x and
  never look back. Now, in that situation, we must switch to 2.6.x
 eventually,
  because network 2.5.x cannot compile many available updates. At the same
  time, the switch to 2.6.x breaks the packages of all those people who
 didn't
  update their libraries.
 
  So what are we supposed to do? Forgo the available updates to keep a
 stable
  system or update at the cost of breaking packages that are sort-of
  unmaintained?
 
  I try to keep as many packages building as possible, and getting those
 ~200
  updates into master was a major effort for me, i.e. I worked on those
  commits several hours per day for the better part of a week. Even with
 all
  that effort spent, however, I cannot remedy the fundamental conflict of
  interest between a system that's up-to-date and a system that's stable.
 At
  some point, I just push whatever I have come up with and I rely on other
  people, like yourself, to help finding the best balance between those two
  contradictory goals.

 Hi Peter,

 First, let me state how much I appreciate the contribution you're making to
 nixpkgs.  Its support of Haskell is superb, and that is in large part due
 to
 your time and effort.  My hope is to support you as best I can, and not to
 criticize your efforts in any way.

 You are exactly right that we have a tension between those two goals.  I
 can
 think of two things that might be done to remedy this, and perhaps make
 updates to master more smooth:

   1. We keep a dedicated branch, haskell-updates, to which only your
 Hackage
  updates get pushed, or fixes to those updates.  I will personally pull
  and rebuild this branch every day on my machine, just as I presently
  rebuild master nearly every day -- compiling more than 2,000 packages
  that I keep locally updated through --leq.

  I (and hopefully others) will help to discover which packages can be
  fixed by inserting references to older packages, which requires
 patches,
  and which must truly be marked as broken until the maintainer of
 that
  package can be contacted.

  Further, I'll help you to maintain a list of outstanding broken
  packages, and see what can be done to make sure this list decreases
 over
  time.

   2. The second option is to create a new haskellPackages set, called
  'stackage'.  The Stackage maintainers already do a lot of the work
  implied by #1, ensuring that every package within the Stackage set can
  build together.  Further, they only upgrade a package once they've
 either
  created a patch, or worked with upstream to update the package.

  Of course, the downside to this is:

- less frequent updates of packages
- a smaller available package set
- life-draining maintenance of a mostly parallel package set

  The upside being that all patching/curating work is done for us,
 likely
  for as long as FP Complete keeps funding people to maintain Stackage.

 Most of the time I can resolve breakages that occur on master, and I'm
 getting
 up to speed with pushing the right fixes back to you via cabal2nix.
 However,
 I still rely on 'master' to be working overall on a daily basis, and
 sometimes
 the degree of breakage in haskellPackages is too much to handle all at
 once,
 forcing me to stop tracking 'master' -- which then delays my involvement in
 getting those breakages fixed.

 I think if we had a separate channel for haskell updates, and that if you
 and
 I both worked together to get that channel ready for inclusion into
 master, we
 could make this upgrade effort smoother for everyone involved, and
 hopefully
 less stressful for you in particular.

 The only important part, then, is that we be sure this branch gets on
 Hydra,
 as another check of suitability.

 It would also be really nice to see you on IRC more, for asking question
 about
 upgrade decisions more quickly than through GitHub.  But I understand if
 that's not possible.

 Yours,
   John
 ___
 nix-dev mailing list
  nix-dev@lists.science.uu.nl
  http://lists.science.uu.nl/mailman/listinfo/nix-dev

Re: [Nix-dev] nix on compute cluster?

2014-10-11 Thread Alexander Kjeldaas
On Fri, Oct 10, 2014 at 7:06 PM, Andreas Herrmann andreas...@gmx.ch wrote:

 On Friday 10 October 2014 17:49:20 Wout Mertens wrote:
  On Fri, Oct 10, 2014 at 4:34 PM, Andreas Herrmann andreas...@gmx.ch
 wrote:
   On Friday 10 October 2014 15:32:52 Wout Mertens wrote:
I think you could do this. You would set it up so the nix server
 does the
compiles and the grid runs distcc. See the wiki, the raspberry pi
 page
   has
explanations about distcc.
   Oh, I didn't know that this worked outside of NixOS. I just can't find
 any
   details on how to integrate distcc with sge. Do you have any experience
   with that?
  No sorry, but I'd just install distcc as a daemon on all nodes or else
 use
  sge for the distribution somehow... pretend that they're super long
 running
  batch jobs...
 I think an on demand approach would fit better for the current usage
 pattern than a permanently running build server. Looking at distcc more
 closely, it seems like a too low level approach. E.g. it wouldn't cover
 clang, or icc (once in nix) compilations.

 The sun grid engine allows for scheduled interactive remote shells. It
 also allows to submit any random shell script, or sequence of commands as a
 batch job. Shouldn't there be a way to make nix run its builders as such
 jobs?

 To my understanding userHook is too late. It's only executed inside the
 builder shell, and can't be used to open a remote shell session that
 executes the builder, correct?



For a compute grid, where no graphics, no desktop, no KDE, etc. are
compiled, a full NixOS build takes hours, not days.  I would simply
compile this on a single server and rdist it.  No need to over-engineer this.

Alexander
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Channel update knocks my box offline

2014-10-07 Thread Alexander Kjeldaas
The command might be racy.  Maybe the default route is added before it is
removed.

Alexander

On Tue, Oct 7, 2014 at 2:39 PM, Roger Qiu roger@polycademy.com wrote:

 So this command is sometimes not being ran?

 systemctl restart network-setup
 On 30/09/2014 6:14 PM, Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk
 wrote:

 On 09/23/2014 07:02 AM, Mateusz Kowalczyk wrote:
  Most recent nixos-unstable channel move knocks my box offline somehow. I
  can reach my local network but nothing on the outside. My network
  config[1] is pretty simple. I noticed this few days ago when I tried to
  switch to master but had no time at that moment to pursue.
 
  Considering this and the Grub problem in the other thread, were the
  tests switched off for this channel move or something?
 
  Apparently --rollback felt the need to kill my X session too ;(
 
  [1]:
 
 https://github.com/Fuuzetsu/nix-project-defaults/blob/master/nixos-config/configuration.nix#L73
 

 With some help on IRC, the mystery was solved. The default route was not
 being set so after switch I had to manually run ‘systemctl restart
 network-setup’, which the switch should have done itself.

 --
 Mateusz K.
 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Channel update knocks my box offline

2014-09-23 Thread Alexander Kjeldaas
I think I have seen this on a bare-minimum machine.  In my case I suspect
something related to the firewall restart.


Alexander

On Tue, Sep 23, 2014 at 9:46 PM, Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk
wrote:

 On 09/23/2014 08:46 PM, Mateusz Kowalczyk wrote:
  On 09/23/2014 01:47 PM, Vladimír Čunát wrote:
  On 09/23/2014 08:02 AM, Mateusz Kowalczyk wrote:
  Apparently --rollback felt the need to kill my X session too ;(
 
  Was it xfce? My experience is that xfce4-session quits when dbus is
  restarted, which happens whenever dbus gets changed on --switch.
 
  Vladimir
 
 
 
  No, I run no DM with just XMonad.
 

 s/DM/DE/

 --
 Mateusz K.
 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev

___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Keeping nixpkgs up to date

2014-09-02 Thread Alexander Kjeldaas
With the awesome monitor.nixos.org system, we're pretty close to 1) deriving
trivial patches (update version and sha256) automatically, 2) building
these trivial patches in a monitor.nixos.org-controlled branch, and 3)
notifying maintainers that a successful build of a new version exists so
they can 4) expose a particular commit as a pull request.

I have a feeling that 97% of the updates could be handled like this,
bringing the maintenance job down to a couple of mouse clicks.
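
For step 1, something like the following Python sketch would do (not the
actual monitor.nixos.org tooling; it assumes the expression keeps
version = "..."; and sha256 = "..."; each on a single line, and that the
tarball URL can be derived from the version):

import re
import subprocess


def prefetch_sha256(url):
    # nix-prefetch-url downloads the file into the store and prints its hash.
    return subprocess.check_output(["nix-prefetch-url", url], text=True).strip()


def bump(default_nix, new_version, url_template):
    new_hash = prefetch_sha256(url_template.format(version=new_version))
    with open(default_nix) as f:
        expr = f.read()
    expr = re.sub(r'version = "[^"]*";', f'version = "{new_version}";', expr, count=1)
    expr = re.sub(r'sha256 = "[^"]*";', f'sha256 = "{new_hash}";', expr, count=1)
    with open(default_nix, "w") as f:
        f.write(expr)


if __name__ == "__main__":
    bump(
        "pkgs/development/libraries/libfoo/default.nix",  # hypothetical package
        "1.2.3",
        "mirror://example/libfoo-{version}.tar.gz",        # hypothetical URL scheme
    )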

Alexander



On Mon, Sep 1, 2014 at 1:24 AM, Roger Qiu roger@polycademy.com wrote:

 Many other package systems are decentralised (Gems/Composer/PyPI/NPM).

 Why not make Nix packages decentralised? So that maintainers can
 maintain their own packages and update them at any time? This would
 speed up evolution of Nix packages.

 One problem to solve is how do we make sure that unresponsive
 maintainers can be replaced by responsive maintainers.

 At this point, the all-packages.nix file will grow bigger and bigger. If
 maintainers can independently update their packages, which might
 introduce more bugs, I think there will need to be a stringent
 tagging/semantic versioning of each package, so that it's possible to
 have many versions of the same package.

 If the package update process is kept the same, the more people that use
 nix, the more people who contribute to nix, the more work to accept
 pull-requests, which would have to mean an increase in the number of
 people who have the privilege to merge pull-requests. Otherwise there's
 going to be an increasing amount of work for a constant number of people.

 Thanks,
 Roger

 On 1/09/2014 12:43 AM, Chris Double wrote:
  Speed of processing pull requests for new packages is an issue.
  Anything that can be done to reduce this would be helpful. It's
  demotivating as a contributor to do what seems to be a simple package
  update of a minor version and have the pull request take weeks.
 
  When I first started using NixOS the tor package was way out of date
  so I updated it. That went pretty quickly. 3 months ago I did a pull
  request to update to a recent tor minor release on unstable. This went
  through ok. I waited a couple of weeks for testing then did a pull
  request to get it in 14.04;
 
  https://github.com/NixOS/nixpkgs/pull/3136
 
  Updating Tor on 14.04 to version 0.2.4.22 and Tor Browser to 3.6.2.
  This has been sitting for two months. Since then a newer version of
  Tor and Tor Browser has come out so it's already out of date. I
  haven't bothered trying to do a pull request to update to the new
  version as there seems no point given that processing pull requests
  must be overloaded.
 
  I can see this only getting worse as more people do pull requests for
  package updates.
 
  New packages are no doubt worse since it takes more analysis of the
  pull request for someone to approve it.
  ___
  nix-dev mailing list
  nix-dev@lists.science.uu.nl
  http://lists.science.uu.nl/mailman/listinfo/nix-dev

 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev

___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Keeping nixpkgs up to date

2014-09-02 Thread Alexander Kjeldaas
On Tue, Sep 2, 2014 at 10:52 AM, Michael Raskin 7c6f4...@mail.ru wrote:

 With the awesome monitor.nixos.org system, we're prety close to 1)
 deriving
 trivial patches (update version and sha256) automatically, 2) building
 these trivial patches in a monitor.nixos.org-controlled branch, and 3)
 notifying maintainers that a successful build of a new version exists so
 they can 4) expose a particular commit as a pull request.
 
 I have a feeling that 97% of the updates could be handled like this,
 bringing the maintainance job down to a couple of mouse clicks.

 I am a package maintainer and can look up…

 irrelevant is for stable-to-unstable with existing unstable expression
 and for cross-branch upgrades.

   1 not sure if 0.8.0.3 is relevant on linux
   1 patch OK
   2 hard to say
   2 not linked on homepage
   3 build failure
  11 patch ok
  12 build pending
  23 irrelevant
  29 no patch


What does "no patch" mean? (A new release without a patch or a tarball
makes no sense.)
"Irrelevant" could be baked into monitor.nixos.org, if it is
non-security-related.
Does "not linked on homepage" mean that monitor.nixos.org missed it?
"Hard to say"??

It seems like you have 3 build failures and 2 "not linked on homepage"
that could not be automated.  Am I reading your numbers correctly?

Alexander



 So, happiness is, as usual, delayed… Although I should probably apply
 this dozen of patches.




___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] systemd in initrd

2014-08-22 Thread Alexander Kjeldaas
On Fri, Aug 22, 2014 at 5:34 PM, Luca Bruno lethalma...@gmail.com wrote:

 On 22/08/2014 17:28, Nicolas Pierron wrote:
  I am just saying, that I do not see why we could not use the jobs
  syntax on top of a string-dependency system which is used by the
  activation script. Systemd solves job dependencies dynamically to
  benefit from the kernel scheduling, while the activation scripts are
  concatenated ahead to make a single  simple activation process. I
  think there is no need to always bring the complexity of systemd
  to the init process, this could be optional. What I suggest is to have
  a 2 backends for the init process. The systemd one, and the
  string-dependency one. Of course, the string-dependency backend would
  have to assert (while building the system) about cases which cannot be
  handled.
 So you want to parse systemd nixos modules in a restricted mode and
 concatenate them? Yes, that makes sense. However, I'm not sure whether
 that's less work than adding systemd to initrd or it ends up with extra
 complexity and hidden bugs in the translation.
 Additionally, once systemd is started, it's not stopped and then
 restarted after switching root. It will be reloaded.

 So it's correct that the complex dependency resolution of systemd is
 overkill, it's also true that systemd is a component that will be
 started anyway at some point.


How will actually building the initrd be improved?  I feel that the
dependency resolution is only half of the problem.  Things like this are the
other half - manual copying of libraries and binaries to minimize initrd
size:

https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/system/boot/luksroot.nix#L408

For my use I don't care whether the initrd is large, but making systemd
services portable to the initrd by copying their closures will probably
affect initrd size a lot more than systemd itself does.
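
To illustrate what "copying the closure" would mean in practice, a rough
Python sketch (hypothetical staging directory, not the actual initrd
builder): pull in every store path from the service's runtime closure
instead of hand-picking binaries and libraries, then measure the result.

import shutil
import subprocess
from pathlib import Path


def runtime_closure(store_path):
    out = subprocess.check_output(["nix-store", "-qR", store_path], text=True)
    return out.splitlines()


def copy_closure_into_staging(store_path, staging="/tmp/initrd-root"):
    for path in runtime_closure(store_path):
        dest = Path(staging) / Path(path).relative_to("/")
        if Path(path).is_dir():
            shutil.copytree(path, dest, symlinks=True, dirs_exist_ok=True)
        else:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)
    # Compare this with the size of a hand-assembled initrd tree.
    subprocess.run(["du", "-sh", staging], check=True)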

Alexander
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Zero Hydra Failures (ZHF) project

2014-08-08 Thread Alexander Kjeldaas
On Fri, Aug 8, 2014 at 12:39 PM, Luca Bruno lethalma...@gmail.com wrote:

 I've launched this Zero Hydra Failures (ZHF) project. Details at
 https://nixos.org/wiki/Zero_Hydra_Failures

 The hydra instance at nixos.org has lots of build failures, it's a huge
 percentage over the total. The aim of this project is to drop failures to
 zero.

 I invite all contributors to take some time reading the project page.
 Trying to build a package, fixing or marking it as broken takes less time
 that you might imagine. It's not as time-expensive as it would be for a
 package that you are interested in.


I'd like to float an alternative path.  We know that the output of a
successful build is a set of files.

But what is the output of a failed build?

I suggest that it should be a suggestion for a fix.  When a build breaks,
it shouldn't just stop dead.  Rather, the build system should call an
exception handler that over time could become pretty sophisticated.

For example, for the failed Python builds, this exception handler could
check whether a successful build exists for another Python version.  If so,
suggest a patch that blacklists the Python version currently used.  A
general rule could be that if a package builds successfully on another
platform, blacklist this platform.

These suggestions could be full-fledged git branches ready for a pull
request, commits or just edits in a local nixpkgs tree.

This is probably some work, but some Perl hacks that handle common Nix
expression layouts could potentially work for a lot of packages.

To simplify auto-editing, a fairly strict syntax could be used for certain
metadata so that simple Perl regexps plus grep would be enough for most of
the blacklisting/whitelisting work.
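
As a toy illustration of the "suggest a fix" handler (standing in for the
Perl-regexp idea, not existing Hydra code), assuming the expression has a
meta = ... { block we can append a line to, and that the failure was seen
only on one platform:

import re


def suggest_broken_patch(expr, failing_platform, ok_platforms):
    # Only suggest a fix if the package built fine somewhere else; otherwise
    # the failure is probably not platform-specific and needs a human.
    if not ok_platforms:
        return None
    marker = f"    broken = true;  # auto-suggested: fails on {failing_platform}\n"
    new_expr, n = re.subn(r"(meta\s*=\s*(with [\w.]+;\s*)?\{\n)", r"\1" + marker, expr, count=1)
    return new_expr if n == 1 else None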

Alexander



 Best regards,


 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Openssl and fast security updates

2014-06-06 Thread Alexander Kjeldaas
On Fri, Jun 6, 2014 at 10:20 AM, Vladimír Čunát vcu...@gmail.com wrote:

 On 06/06/2014 08:59 AM, Ertugrul Söylemez wrote:

 When we use priorities generously we could avoid a lot of delay even in
 less critical cases.


 The main problem I see is that normally you don't want to release a
 channel until *all* parts have rebuilt.


+1. Rebuilding for a server that runs, say, ssh, apache, nginx, postfix and a
few such services takes maybe 2% of the time required to build a full
desktop distribution.

I think being able to release packages used on public-facing servers could
be prioritized over, say, LibreOffice, Qt, WebKit, etc.

If the system environment is not polluted by the desktop packages, it
could be possible to upgrade the system environment before the user
environments that need one or two orders of magnitude more time to compile.

Calculating the transitive closure of all NixOS modules/services run by
systemd is one way to prioritize.  A popularity contest could be added to
that.
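
A hedged sketch of that prioritization, assuming we already have one store
path per enabled service unit (how to enumerate them is left out): rank
store paths by how many service closures pull them in, and release the
most-shared ones first.

import subprocess
from collections import Counter


def closure(store_path):
    out = subprocess.check_output(["nix-store", "-qR", store_path], text=True)
    return set(out.splitlines())


def priority_ranking(service_store_paths):
    counts = Counter()
    for service in service_store_paths:
        counts.update(closure(service))
    # Store paths appearing in many service closures get rebuilt/released first.
    return counts.most_common()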

Alexander



 We do have meta.schedulingPriority, but it's used little, and from earlier
 discussions I think it's really hard to objectively determine which
 packages are more important than others ;-)

 BTW, aborting jobs would need to be done very carefully, because we have
 some jobs that run for hours, so that could lead to wasting lots of time.


 Vlada



 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Unfree packages in Nixpkgs

2014-04-09 Thread Alexander Kjeldaas
On Wed, Apr 9, 2014 at 11:43 AM, Eelco Dolstra
eelco.dols...@logicblox.comwrote:

 Hi all,

 tl;dr version: The default value of the ‘allowUnfree’ Nixpkgs
 configuration flag
 has changed from ‘true’ to ‘false’. This means that if your NixOS system
 configuration requires unfree packages, you need to add

   { nixpkgs.config.allowUnfree = true; }

 to your configuration.nix.  If you're using Nixpkgs standalone, you need
 to add

   { config.allowUnfree = true; }

 to ~/.nixpkgs/config.nix, or pass

   --arg config '{ allowUnfree = true; }'

 on the command line. (Also, note that unfree packages don't even show up in
 ‘nix-env -qa’ unless you have allowUnfree enabled.)

 Longer version: Nixpkgs has since the beginning been intended as a free
 software
 distribution, but the use of unfree packages is sometimes hard to avoid
 (for
 example, the NVIDIA X11 drivers), so Nixpkgs has always included some
 unfree
 packages. This is problematic, because it can cause a NixOS user to
 accidentally
 specify a system configuration that contains unfree packages, which might
 lead
 to unexpected legal problems. For example, the license of the Teamspeak
 server
 package disallows renting Teamspeak servers, so a NixOS configuration that
 enables it has limitations that the user may not be aware of.

 To deal with this problem has long been a contentious matter. One option
 was to
 move unfree packages into a separate repository, but that's annoying
 because 1)
 it's more work for users to deal with multiple repositories; 2) it creates
 an
 inter-repository dependency management issue.

 So the present solution is to:

 * Allow unfree packages in the Nixpkgs repository, provided that they have
 a
 meta.license attribute with a value of unfree or
 unfree-redistributable.

 * Throw an exception if you try to evaluate such a package, unless the
 Nixpkgs
 configuration option ‘allowUnfree’ is set to true.

 This means that we can keep unfree packages in the Nixpkgs repository, but
 users
 don't get exposed to them unless they explicitly ask for it.  For example:

   $ nix-env -iA nixos.pkgs.skype
   error: package ‘skype-4.2.0.13’ in ‘.../skype/default.nix:68’ has an
 unfree
 license, refusing to evaluate


This error message is pretty unhelpful.  Maybe a URL, or at least the word
'allowUnfree', should be mentioned to help the poor user who stumbles upon
it.

Alexander


 but

   $ nix-env -iA nixos.pkgs.skype --arg config '{ allowUnfree = true; }'

 does work.

 Note that the allowUnfree flag does not affect packages with the
 unfree-redistributable-firmware license, which are always allowed.  This
 is
 because banning firmware packages is really too masochistic at present.

 --
 Eelco Dolstra | LogicBlox, Inc. | http://nixos.org/~eelco/
 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev

___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Accidental force push to nixpkgs

2014-04-08 Thread Alexander Kjeldaas
It is possible to add a post-receive hook that sends out a big fat warning.
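
For example, a post-receive hook along these lines (a sketch; note that
GitHub won't run server-side hooks itself, see the follow-up below): git
feeds it "old new ref" lines on stdin, and an update is a force push when
the old commit is no longer an ancestor of the new one.  Here the warning is
just printed; a real hook would mail the list.

#!/usr/bin/env python3
import subprocess
import sys

ZERO = "0" * 40  # an all-zero sha means ref creation or deletion


def is_ancestor(old, new):
    return subprocess.call(["git", "merge-base", "--is-ancestor", old, new]) == 0


for line in sys.stdin:
    old, new, ref = line.split()
    if old == ZERO or new == ZERO:
        continue  # creation/deletion, not a history rewrite
    if not is_ancestor(old, new):
        print(f"WARNING: non-fast-forward push rewrote {ref} ({old[:7]} -> {new[:7]})")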

Alexander


On Tue, Apr 8, 2014 at 8:14 AM, Vladimír Čunát vcu...@gmail.com wrote:

 On 04/07/2014 07:30 PM, Shea Levy wrote:

 I had my remotes set up wrong and accidentally force pushed


 I see it happens occasionally... is it possible on GitHub to forbid
 force-pushes? I don't know about any use case of non-fast-forward pushing
 on the central repo.

 Vlada



 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Accidental force push to nixpkgs

2014-04-08 Thread Alexander Kjeldaas
Yes, the problem seems to be that GitHub does not actually execute
server-side hooks, but only allows webhooks.  The webhooks appear to be
asynchronous, and thus pre-receive isn't supported.

Alexander


On Tue, Apr 8, 2014 at 8:37 AM, Vladimír Čunát vcu...@gmail.com wrote:

 On 04/08/2014 08:27 AM, Alexander Kjeldaas wrote:

 It is possible to add a post receive hook that sends out a big fat
 warning.


 Hmm, it seems better to abort if possible (and print the warning), because
 after it's done, the non-ff pusher typically doesn't even know/have the
 commits that were there before, so I see no easy way to undo it instantly.

 Vlada



 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] NiJS package manager

2014-04-01 Thread Alexander Kjeldaas
I think this is the wrong way to go.  The bootstrap size for JavaScript is
huge, with Node.js depending on the world.  Nix is relatively small, which is
nice.  It's far from optimal, though.

Haskell is also huge, but there are a few languages with tiny footprints. I
suggest we look at ash or maybe zsh. I think keeping the core as small as
possible is worth the extra verbosity of using a strict language.

My 2 cents.

Alexander
On Apr 1, 2014 12:11 PM, Sander van der Burg - EWI 
s.vanderb...@tudelft.nl wrote:

  Hello Nixers,

 After a year of hard work, I proudly want to present you NiJS: the
 asynchronous package manager.

 In NiJS, you can use the more popular, innovating and future proof
 JavaScript language to specify package build specifications while still
 having most of the useful goodies that Nix has.

 Furthermore, because it's asynchronous and I/O events are non-blocking,
 it's also very fast and highly scalable.

 More info:


 http://sandervanderburg.blogspot.com/2014/04/asynchronous-package-management-with.html

 Best,

 Sander


 ___
 nix-dev mailing list
 nix-dev@lists.science.uu.nl
 http://lists.science.uu.nl/mailman/listinfo/nix-dev


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev