Re: Daemon update

2015-06-01 Thread Alexander Vorobiev
At first I saw the exact same error in this list:

http://lists.gnu.org/archive/html/bug-guix/2014-12/msg2.html

I then pulled again, rebooted the VM just in case, reinstalled guix, and
now I am seeing an error similar to the second error from that email:

ERROR: gapplication - missing test plan
ERROR: gapplication - exited with status 139 (terminated by signal 11?)

That email had status 136 and signal 6.

Thanks,
Alex

On Mon, Jun 1, 2015 at 3:23 PM, Alexander Vorobiev <
alexander.vorob...@gmail.com> wrote:

> Yes it is, it happens every time I run it (make guix-binary). Could it be
> the custom store and/or local cache? Or the wrong version of something in
> the system I am using (latest Arch Linux)?
>
> Thanks,
> Alex
>
> On Mon, Jun 1, 2015 at 2:58 PM, Ludovic Courtès  wrote:
>
>> Alexander Vorobiev  skribis:
>>
>> > That fixed tcsh, thanks. Here is the next stop - glib (libgio), seems
>> > to be failing its unit tests:
>> >
>> > 
>> > PASS: defaultvalue 80 /Default Values/GZlibCompressor
>> > PASS: defaultvalue 81 /Default Values/GZlibDecompressor
>> > tap-driver.sh: internal error getting exit status
>> > tap-driver.sh: fatal: I/O or internal error
>> > Makefile:3751: recipe for target 'defaultvalue.log' failed
>> > make[8]: *** [defaultvalue.log] Error 1
>> > make[8]: Leaving directory
>> > '/tmp/nix-build-glib-2.44.0.drv-0/glib-2.44.0/gio/tests'
>>
>> Is that deterministic?  I.e., does it happen if you run the build again?
>> We haven’t experienced it on hydra.gnu.org.
>>
>> Thanks,
>> Ludo’.
>>
>
>


Re: Guix in single-user mode (degenerated?)

2015-06-01 Thread Pjotr Prins
On Mon, Jun 01, 2015 at 09:56:42PM +0200, Ludovic Courtès wrote:
> > There are situations where I can't get root - mostly on scientific
> > cluster environments. I don't think it is a priority, but it would be
> > good to have if it works.
> 
> Yes, that’s an issue.  Unfortunately, users would sacrifice
> reproducibility by running the daemon as non-root (and we would get
> reports about builds that fail unexpectedly), and using a non-standard
> store directory may also hit shebang length limits and so on.
> 
> So I think we must bribe a bunch of HPC sysadmins ;-), get experience
> with real system-wide installations on such systems, and see what needs
> to be adjusted.  Thanks to Ricardo and his lab we already have some
> feedback: .

It is a good writeup, indeed, and may help make a case. Somehow the
people who run clusters are not so enlightened.

> If you are in a position where you can experiment with such a setup,
> your experience would be invaluable!

I am not in a position to change things, but maybe others are. The
good news is that VMs are becoming more common and may drive change.
Give it another 5-10 years and I expect it to be less of an issue.

Pj.



[PATCH] Add a ‘verifyStore’ RPC

2015-06-01 Thread Ludovic Courtès
Hello!

The patch below adds a ‘verifyStore’ RPC with the same signature as the
current LocalStore::verifyStore method.

Thanks,
Ludo’.

From aef46c03ca77eb6344f4892672eb6d9d06432041 Mon Sep 17 00:00:00 2001
From: Ludovic Courtès
Date: Mon, 1 Jun 2015 23:17:10 +0200
Subject: [PATCH] Add a 'verifyStore' remote procedure call.

---
 src/libstore/remote-store.cc    | 10 ++++++++++
 src/libstore/remote-store.hh    |  1 +
 src/libstore/store-api.hh       |  4 ++++
 src/libstore/worker-protocol.hh |  3 ++-
 src/nix-daemon/nix-daemon.cc    | 10 ++++++++++
 5 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/src/libstore/remote-store.cc b/src/libstore/remote-store.cc
index 3b2825c..ab87d9d 100644
--- a/src/libstore/remote-store.cc
+++ b/src/libstore/remote-store.cc
@@ -587,6 +587,16 @@ void RemoteStore::optimiseStore()
     readInt(from);
 }
 
+bool RemoteStore::verifyStore(bool checkContents, bool repair)
+{
+    openConnection();
+    writeInt(wopVerifyStore, to);
+    writeInt(checkContents, to);
+    writeInt(repair, to);
+    processStderr();
+    return readInt(from) != 0;
+}
+
 void RemoteStore::processStderr(Sink * sink, Source * source)
 {
     to.flush();
diff --git a/src/libstore/remote-store.hh b/src/libstore/remote-store.hh
index 14209cb..030120d 100644
--- a/src/libstore/remote-store.hh
+++ b/src/libstore/remote-store.hh
@@ -85,6 +85,7 @@ public:
 
     void optimiseStore();
 
+    bool verifyStore(bool checkContents, bool repair);
 private:
     AutoCloseFD fdSocket;
     FdSink to;
diff --git a/src/libstore/store-api.hh b/src/libstore/store-api.hh
index 97a60a6..3764f3e 100644
--- a/src/libstore/store-api.hh
+++ b/src/libstore/store-api.hh
@@ -254,6 +254,10 @@ public:
     /* Optimise the disk space usage of the Nix store by hard-linking files
        with the same contents. */
     virtual void optimiseStore() = 0;
+
+    /* Check the integrity of the Nix store.  Returns true if errors
+       remain. */
+    virtual bool verifyStore(bool checkContents, bool repair) = 0;
 };
 
 
diff --git a/src/libstore/worker-protocol.hh b/src/libstore/worker-protocol.hh
index 4b040b7..d037d74 100644
--- a/src/libstore/worker-protocol.hh
+++ b/src/libstore/worker-protocol.hh
@@ -42,7 +42,8 @@ typedef enum {
    wopQueryValidPaths = 31,
    wopQuerySubstitutablePaths = 32,
    wopQueryValidDerivers = 33,
-   wopOptimiseStore = 34
+   wopOptimiseStore = 34,
+   wopVerifyStore = 35
 } WorkerOp;
 
 
diff --git a/src/nix-daemon/nix-daemon.cc b/src/nix-daemon/nix-daemon.cc
index bed7de0..b3552a9 100644
--- a/src/nix-daemon/nix-daemon.cc
+++ b/src/nix-daemon/nix-daemon.cc
@@ -519,6 +519,16 @@ static void performOp(bool trusted, unsigned int clientVersion,
         writeInt(1, to);
         break;
 
+    case wopVerifyStore: {
+        bool checkContents = readInt(from) != 0;
+        bool repair = readInt(from) != 0;
+        startWork();
+        bool errors = store->verifyStore(checkContents, repair);
+        stopWork();
+        writeInt(errors, to);
+        break;
+    }
+
     default:
         throw Error(format("invalid operation %1%") % op);
     }
-- 
2.2.1



Re: Daemon update

2015-06-01 Thread Alexander Vorobiev
Yes it is, it happens every time I run it (make guix-binary). Could it be
the custom store and/or local cache? Or the wrong version of something in
the system I am using (latest Arch Linux)?

Thanks,
Alex

On Mon, Jun 1, 2015 at 2:58 PM, Ludovic Courtès  wrote:

> Alexander Vorobiev  skribis:
>
> > That fixed tcsh, thanks. Here is the next stop - glib (libgio), seems to
> > be failing its unit tests:
> >
> > 
> > PASS: defaultvalue 80 /Default Values/GZlibCompressor
> > PASS: defaultvalue 81 /Default Values/GZlibDecompressor
> > tap-driver.sh: internal error getting exit status
> > tap-driver.sh: fatal: I/O or internal error
> > Makefile:3751: recipe for target 'defaultvalue.log' failed
> > make[8]: *** [defaultvalue.log] Error 1
> > make[8]: Leaving directory
> > '/tmp/nix-build-glib-2.44.0.drv-0/glib-2.44.0/gio/tests'
>
> Is that deterministic?  I.e., does it happen if you run the build again?
> We haven’t experienced it on hydra.gnu.org.
>
> Thanks,
> Ludo’.
>


Re: Suggest add FAQ documents about integrating with other linux distribution.

2015-06-01 Thread Andreas Enge
On Wed, May 27, 2015 at 10:15:23PM +0200, Ludovic Courtès wrote:
> Andreas Enge  skribis:
> >> Currently, fontconfig from Guix do not use '/etc/fonts/fonts.conf',
> >> and hardcode ~/.guix-profile/share/fonts (which think is a bad idea).
> > It has the advantage of working out-of-the-box for Guix on top of another
> > distro.
> But then by default it doesn’t use any of the host distro fonts, right?

I suppose not. Maybe my Debian programs now use Debian fonts, and my Guix
programs use Guix fonts, and both work.

> (I’m stealthily trying to get someone to write an answer that can be
> copied in the manual:
> .

Well, since I do not know what happens, and am simply happy with my setup,
which does seem to work for me, I am not going to add caveats to the manual!

Andreas




Re: Merging ‘HACKING’ in the manual?

2015-06-01 Thread Ricardo Wurmus
> We should aim to blur the distinction between developers and users, and
> I think we have the right technical environment for that
> (“configuration” files that happen to actually be Scheme, “package
> recipes” that happen to be Scheme as well, etc.)  So I’m in favor of
> keeping the technical information all grouped together.

I agree.  Keeping it all together also encourages users to make use of
their software freedom to modify recipes, add their own and possibly
submit them upstream.

>> I don't have a strong opinion.  It might be a separate “Hacking” (or
>> whatever) section in the current manual or another “maint” info manual.
>> Both solutions look good to me.  I have a slight preference for the
>> first variant though.
>
> Yes, that’s also my current inclination.

I, too, would prefer to have it as a section in the current manual, but
that's primarily because I never remember where I can find the
information I'm looking for.  It's easier to have it in the same manual,
organised in an order that goes from casual user to contributor.

~~ Ricardo




Re: Daemon update

2015-06-01 Thread Ludovic Courtès
Alexander Vorobiev  skribis:

> That fixed tcsh, thanks. Here is the next stop - glib (libgio), seems to be
> failing its unit tests:
>
> 
> PASS: defaultvalue 80 /Default Values/GZlibCompressor
> PASS: defaultvalue 81 /Default Values/GZlibDecompressor
> tap-driver.sh: internal error getting exit status
> tap-driver.sh: fatal: I/O or internal error
> Makefile:3751: recipe for target 'defaultvalue.log' failed
> make[8]: *** [defaultvalue.log] Error 1
> make[8]: Leaving directory
> '/tmp/nix-build-glib-2.44.0.drv-0/glib-2.44.0/gio/tests'

Is that deterministic?  I.e., does it happen if you run the build again?
We haven’t experienced it on hydra.gnu.org.

Thanks,
Ludo’.



Re: PATCH: LibreOffice

2015-06-01 Thread Andreas Enge
On Mon, Jun 01, 2015 at 09:50:15PM +0200, Andreas Enge wrote:
> Now that I deleted my which.go, I need to recompile with the which from base.
> So this will be the build process for tonight

Actually, I was wrong - "which" moved to a new place, but stayed the same,
so there is no recompilation of things that depend on it!

Andreas




Re: Guix in single-user mode (degenerated?)

2015-06-01 Thread Ludovic Courtès
Pjotr Prins  skribis:

> Q1: How does single-user degenerated mode actually work? I suppose
> 'degenerated' refers to the fact that all packages have to be rebuilt
> (non-binary install) and/or that it is single threaded? 

I think you mean “degraded”, but there’s no such mode; where did you
read about it?  :-)

The last paragraph at

mentions use of the daemon by non-root users, and suggests that this is
the last resort.

> Q2: How do I use it?

guix-daemon --disable-chroot

> Q3: Anyone using it right now?

My guess is that it’s not widely used.

> There are situations where I can't get root - mostly on scientific
> cluster environments. I don't think it is a priority, but it would be
> good to have if it works.

Yes, that’s an issue.  Unfortunately, users would sacrifice
reproducibility by running the daemon as non-root (and we would get
reports about builds that fail unexpectedly), and using a non-standard
store directory may also hit shebang length limits and so on.

So I think we must bribe a bunch of HPC sysadmins ;-), get experience
with real system-wide installations on such systems, and see what needs
to be adjusted.  Thanks to Ricardo and his lab we already have some
feedback: .

If you are in a position where you can experiment with such a setup,
your experience would be invaluable!

Ludo’.



Re: PATCH: LibreOffice

2015-06-01 Thread Andreas Enge
On Sun, May 31, 2015 at 01:42:30PM -0400, Mark H Weaver wrote:
> Please do (version-prefix version 3) instead.  You'll need to import
> (guix utils) for it.

Very neat! I knew about version-major+minor, but not this one.

On Sun, May 31, 2015 at 10:32:39PM +0200, Ludovic Courtès wrote:
> I think we should try hard to not make Libreoffice depend on
> Autoconf/Automake; it’s always a good idea to avoid it, but even more so
> here given the build time and size of the thing.  :-)
> Does that seem doable?

I will try it out. Given the build time and the fact that parallel builds
do not work (but they seemed to have worked for John last year, I have no
idea why there is a problem now), things advance at a very leisurely pace.
Now that I deleted my which.go, I need to recompile with the which from base.
So this will be the build process for tonight; tomorrow, I can try out
something else :-)

On Mon, Jun 01, 2015 at 02:52:05PM +0200, John Darrington wrote:
> Obviously any exercise in Free Software is "doable" (we have the source code!)
> but, my experience in packaging for Guix tells me that it can be very easy to
> fall into the trap of making changes to the upstream packages which
> effectively forks them.  Then we become the maintainer of a forked package.

Of course that is something we should avoid. With a bit of luck here, though,
LibreOffice will compile with an unpatched xmlsec tarball. After all, we will
not install binaries or scripts from xmlsec (at least I hope so), so maybe
the patch-shebang phase is not really required. We will see.

Andreas




Re: [PATCH] Add HTSlib.

2015-06-01 Thread Ludovic Courtès
Ricardo Wurmus  skribis:

> From 708093dc001d46a2ec487873e8150861b05872da Mon Sep 17 00:00:00 2001
> From: Ricardo Wurmus 
> Date: Mon, 1 Jun 2015 15:06:04 +0200
> Subject: [PATCH] gnu: Add HTSlib.
>
> * gnu/packages/bioinformatics.scm (htslib): New variable.

OK, thanks!

Ludo’.



Re: PATCH: LibreOffice

2015-06-01 Thread Ludovic Courtès
John Darrington  skribis:

> Obviously any exercise in Free Software is "doable" (we have the source
> code!) but, my experience in packaging for Guix tells me that it can be
> very easy to fall into the trap of making changes to the upstream packages
> which effectively forks them.  Then we become the maintainer of a forked
> package.

Apart from elogind, I can’t think of any package that Guix provides that
would qualify as “forked.”

Ludo’.



Re: [PATCH] Attempt to fix build of sra-tools on i686.

2015-06-01 Thread Ludovic Courtès
Ricardo Wurmus  skribis:

> From 6d3e22354c767f76711e3f73a2e26d7d7783f57a Mon Sep 17 00:00:00 2001
> From: Ricardo Wurmus 
> Date: Mon, 1 Jun 2015 11:31:29 +0200
> Subject: [PATCH] gnu: ncbi-vdb: Use "i386" instead of "i686" in directory
>  name.
>
> * gnu/packages/bioinformatics.scm (ncbi-vdb)[arguments]: Copy libraries from
>   "linux/gcc/i386" directory instead of "linux/gcc/i686" when building on
>   i686.

This is terrible!  :-)  Could you make the explanation a comment in the source?

Otherwise LGTM!

Ludo’.



Guix deploy (and replace Puppet/Chef)

2015-06-01 Thread Pjotr Prins
(changed subject)

On Mon, Jun 01, 2015 at 12:49:31PM -0400, Thompson, David wrote:
> > Are you envisaging something like that? Something that reruns state
> > and updates? It is a lot more complicated because you need a way to
> > define state, modify files, allow for transaction and roll-back.
> > Ideally the system should execute in parallel (for speed) and be
> > ordered (i.e. if two methods change the same file the order matters).
> > BTW cfruby lacks transactions and parallel execution, we could start
> > without.
> 
> Sort of yes, sort of no.  What you are describing sounds quite
> imperative.  

Right.

> In Guix, if we were to re-deploy a configuration to a
> running remote system, we'd do the equivalent of what 'guix system
> reconfigure' does now for local GuixSD machines: an atomic update of
> the current system (changing the /run/current-system symlink).  'guix
> deploy' cannot possibly do a good job of managing non-GuixSD systems
> that just happen to run Guix.  I think it would be better to use the
> existing configuration management tools for that.

OK, this sounds exciting and could certainly work well. I guess
hosts.allow would be an input to an sshd builder, right, so different
configurations become their own subtrees in the store. I like that
idea. hosts.allow (as a complication) is actually part of tcp-wrappers
so it would have to be configured for all tools that it addresses on a
machine. I presume hosts.allow would be stored in the store too.

> > The first step is to define state by creating 'classes' of machines.
> > One class could be deploy 'sshd with IP restriction'. That would be a
> > good use case.
> 
> Are you proposing that we add a formal concept of "classes" in Guix or
> is this just to illustrate an idea?  If the former, I think that pure
> functions that return  objects are far more powerful
> than a primitive class labeling system.
> 
> Hope this helps clarify some things.  Thoughts?

I am not too clear on the OS objects, but maybe I should play with
deploy first. I guess what you are saying is that a machine
configuration is generated by its s-expression(s) so we don't need
classes. Wherever these s-expressions come from is an implementation
issue. Right?

Pj.



Re: Merging ‘HACKING’ in the manual?

2015-06-01 Thread Ludovic Courtès
Luis Felipe López Acevedo  skribis:

> I like GNUnet's use of "handbooks" to separate information. They have
> Installation Handbook, User Handbook, and Developer Handbook. I think
> something similar could work well for Guix.

We should aim to blur the distinction between developers and users, and
I think we have the right technical environment for that
(“configuration” files that happen to actually be Scheme, “package
recipes” that happen to be Scheme as well, etc.)  So I’m in favor of
keeping the technical information all grouped together.

I think the installations sections are fine in the main manual.

The question here is more about the organizational documentation.

Alex Kost  skribis:

> I don't have a strong opinion.  It might be a separate “Hacking” (or
> whatever) section in the current manual or another “maint” info manual.
> Both solutions look good to me.  I have a slight preference for the
> first variant though.

Yes, that’s also my current inclination.

Thanks for your feedback!

Ludo’.



Re: Questions and Notes regarding Offloading and Substitutes

2015-06-01 Thread Ludovic Courtès
Hi!

Sorry for the delay, that’s what you get for writing long messages.  ;-)

Carlos Sosa  skribis:

> What is a little bit confusing is how `guix build` and offloading work.
> To be honest, in the beginning, offloading was not that straightforward,
> since the manual fails to point out that `guix offloading` is using
> `lsh`. So there are certain steps that need to be addressed, like
> creating an 'lsh' key and then exporting it to an OpenSSH public key.
> After that you might have a problem with the ciphers available in
> OpenSSH and lsh, which are disabled by default in OpenSSH. After setting
> all of that up, build offloading works flawlessly; I was really happy I
> didn't encounter any problems.

I must say that the fact that lsh was not mentioned was originally
intentional, on the grounds that the implementation would “soon” switch
to Guile-SSH instead (which is compatible with OpenSSH, not lsh.)  But
that hasn’t happened yet, although there’s a branch, wip-guile-ssh.

> Now what I don't get is why offloading mostly works for builds and not
> for guix package installation. I might be confusing what builds are for,
> but shouldn't an already finished build be used by `guix package` when
> installing a package. For example, if I do 'guix build emacs' that will
> build emacs and its dependencies, and I can call later `guix package -i
> emacs`, right? When I do `guix package -i emacs` with the daemon with
> '--max-jobs=0' instead of offloading the compilation and built to any of
> the machines in 'machines.scm', it seems to offload some bits but still
> compile some components in the machine. I think the latter is fine, but
> I consider it's best if they did everything. In my case, I own two small
> netbooks that I would love to offload all of the process of building and
> compiling to a bigger node capable of doing that, but still keeping a
> store and profile in those machines.

Normally everything gets offloaded provided there are available slots on
the target machines (as per the ‘parallel-builds’ field.)

Now, some derivations explicitly ask not to be offloaded (search for
#:local-build? in the code.)  This is typically derivations that do very
little work and produce small files, where the overhead of offloading
would outweigh the gain.

For example, the derivation that builds a profile (when running ‘guix
package -i’) is not subject to offloading.  This may be what you were
observing, no?

> I'm having a couple of problems with substitutes: I can't seem to send
> all of the needed substitutes for, let us say, the emacs package to the
> small netbooks from the big node at home. They do grab
> some derivations and other elements, but seem to give me a Nix failure
> from time to time. Are substitutes what I'm ultimately looking for,
> having a build node build everything and send it as a substitute to the
> smaller nodes?

What do you mean by “send all of the needed substitutes”?  Note that
offloading and substitutes are two different mechanisms, even though
they seem similar.

HTH!

Ludo’.



Re: Daemon update

2015-06-01 Thread Alexander Vorobiev
That fixed tcsh, thanks. Here is the next stop - glib (libgio), seems to be
failing its unit tests:


PASS: defaultvalue 80 /Default Values/GZlibCompressor
PASS: defaultvalue 81 /Default Values/GZlibDecompressor
tap-driver.sh: internal error getting exit status
tap-driver.sh: fatal: I/O or internal error
Makefile:3751: recipe for target 'defaultvalue.log' failed
make[8]: *** [defaultvalue.log] Error 1
make[8]: Leaving directory
'/tmp/nix-build-glib-2.44.0.drv-0/glib-2.44.0/gio/tests'
Makefile:3653: recipe for target 'check-TESTS' failed
make[7]: *** [check-TESTS] Error 2
make[7]: Leaving directory
'/tmp/nix-build-glib-2.44.0.drv-0/glib-2.44.0/gio/tests'
Makefile:4353: recipe for target 'check-am' failed
make[6]: *** [check-am] Error 2
make[6]: Leaving directory
'/tmp/nix-build-glib-2.44.0.drv-0/glib-2.44.0/gio/tests'
Makefile:3440: recipe for target 'check-recursive' failed
make[5]: *** [check-recursive] Error 1
make[5]: Leaving directory
'/tmp/nix-build-glib-2.44.0.drv-0/glib-2.44.0/gio/tests'

Thanks,
Alex

On Sun, May 31, 2015 at 2:14 PM, Ludovic Courtès  wrote:

> Alexander Vorobiev  skribis:
>
> > Pulled, restarted. The next stop is tcsh. The tarball at ftp.astron.com
> > is gone, replaced with a more recent version...
>
> Fixed, thanks.
>
> > One observation: apparently coreutils refuses to be built as root, so I
> > had to create a build user/group to give to guix-daemon.
>
> Right, and both the manual and guix-daemon warn you against running
> guix-daemon as root without --build-users-group.  :-)
>
> Ludo’.
>


Re: Guix "ops"

2015-06-01 Thread Thompson, David
Hey Pjotr,

On Mon, Jun 1, 2015 at 11:18 AM, Pjotr Prins  wrote:
>> There's also unanswered questions like: How should we keep track of
>> state?  How do we reconfigure already deployed machines?  How do we shut
>> down a deployment and unprovision the resources it used?  Basically, how
>> many hooks does the  record type need to cover everything?
>
> The current 'deploy' basically generates a machine from scratch -
> there is some similar functionality in Nix. I read the code and it is
> pretty simplistic for now which is possible when you generate a
> machine once.

Yes, the code is simplistic because I have only spent about a day or
two actually writing code for it. You'll notice that the 
data type has many of its fields commented out.  Much work to do.

> What I would like to have is a state-machine that can be rerun within
> an installed deploy in the vein of Puppet/Chef/cfengine but without
> the complicated setup of a client/server model (the server 'model'
> should simply be a git repository with branches).

For persisting state between 'guix deploy' runs, I was going to write
an s-expression to a file.  The exact state written to such a file and
how it will be used will be defined by  objects.  The state
could then be synced among all the workstations that are doing
deployments via whichever mechanism you prefer.  I would like to use
Git, too, but I don't want 'guix deploy' to be concerned with such details.

> That means that after 'deploy' we can run the state and it
> checks/updates files that may have been changed. I use something like
> that to run my hosts.allow configuration (for example). When I want to
> add an IP address, I change the state and rerun 'deploy'. In the past
> I used to run cfengine. Later I wrote cfruby/cfenjin which I still run
> today for deployment and updates. For me it is time to write something
> that plays well with GNU Guix.
>
> Are you envisaging something like that? Something that reruns state
> and updates? It is a lot more complicated because you need a way to
> define state, modify files, allow for transaction and roll-back.
> Ideally the system should execute in parallel (for speed) and be
> ordered (i.e. if two methods change the same file the order matters).
> BTW cfruby lacks transactions and parallel execution, we could start
> without.

Sort of yes, sort of no.  What you are describing sounds quite
imperative.  In Guix, if we were to re-deploy a configuration to a
running remote system, we'd do the equivalent of what 'guix system
reconfigure' does now for local GuixSD machines: an atomic update of
the current system (changing the /run/current-system symlink).  'guix
deploy' cannot possibly do a good job of managing non-GuixSD systems
that just happen to run Guix.  I think it would be better to use the
existing configuration management tools for that.

> The first step is to define state by creating 'classes' of machines.
> One class could be deploy 'sshd with IP restriction'. That would be a
> good use case.

Are you proposing that we add a formal concept of "classes" in Guix or
is this just to illustrate an idea?  If the former, I think that pure
functions that return  objects are far more powerful
than a primitive class labeling system.

Hope this helps clarify some things.  Thoughts?

- Dave



Guix in single-user mode (degenerated?)

2015-06-01 Thread Pjotr Prins
Q1: How does single-user degenerated mode actually work? I suppose
'degenerated' refers to the fact that all packages have to be rebuilt
(non-binary install) and/or that it is single threaded? 

Q2: How do I use it?

Q3: Anyone using it right now?

There are situations where I can't get root - mostly on scientific
cluster environments. I don't think it is a priority, but it would be
good to have if it works.

Pj.



Re: hackage importer

2015-06-01 Thread Federico Beffa
Hi,

sorry for taking so long to answer!

On Sat, May 2, 2015 at 2:48 PM, Ludovic Courtès  wrote:
>> Subject: [PATCH] import: hackage: Refactor parsing code and add new option.
>>
>> * guix/import/cabal.scm: New file.
>>
>> * guix/import/hackage.scm: Update to use the new Cabal parsing module.
>>
>> * tests/hackage.scm: Update tests for private functions.
>>
>> * guix/scripts/import/hackage.scm: Add new '--cabal-environment' option.
>>
>> * doc/guix.texi: ... and document it.
>>
>> * Makefile.am (MODULES): Add 'guix/import/cabal.scm',
>>   'guix/import/hackage.scm' and 'guix/scripts/import/hackage.scm'.
>>   (SCM_TESTS): Add 'tests/hackage.scm'.
>
> No newlines between entries.

Done.

 [...]

> This procedure is intimidating.  I think this is partly due to its
> length, to the big let-values, the long identifiers, the many local
> variables, nested binds, etc.

Ok, this procedure has now ... disappeared ... or rather it is now
hidden in a huge, but invisible macro ;-)
I've added support for brace-delimited blocks.  In so doing, the
complexity of the ad-hoc solution increased further, and I decided that it
was time to study (and use) a proper parser.

But, a couple of words on your remarks:

- Thanks to your comment about the long list of local variables, I
(re-)discovered the (test => expr) form of cond clauses. Very useful!
- The nested use of the >>= function didn't look nice and the reason
is that it is really meant as a way to sequence monadic functions as
in (>>= m f1 f2 ...).  Unfortunately the current version of >>= in
guile only accepts 2 arguments (1 function), hence the nesting.  It
would be nice to correct that :-)

In any case, I had to give up on the state monad because the lalr
parser in Guile doesn't play nice with the functional programming
paradigm.

>> +(define-record-type 
>> +  (make-cabal-package name version license home-page source-repository
>> +  synopsis description
>> +  executables lib test-suites
>> +  flags eval-environment)
>> +  cabal-package?
>> +  (name   cabal-package-name)
>> +  (version cabal-package-version)
>> +  (license cabal-package-license)
>> +  (home-page cabal-package-home-page)
>> +  (source-repository cabal-package-source-repository)
>> +  (synopsis cabal-package-synopsis)
>> +  (description cabal-package-description)
>> +  (executables cabal-package-executables)
>> +  (lib cabal-package-library) ; 'library' is a Scheme keyword
>
> There are no keyboards in Scheme.  :-)

??

> [...]
>
>> +  (define (impl haskell)
>> +(let* ((haskell-implementation (or (assoc-ref env "impl") "ghc"))
>> +   (impl-rx-result-with-version
>> +(string-match "([a-zA-Z0-9_]+)-([0-9.]+)" 
>> haskell-implementation))
>> +   (impl-name (or (and=> impl-rx-result-with-version
>> + (cut match:substring <> 1))
>> +  haskell-implementation))
>> +   (impl-version (and=> impl-rx-result-with-version
>> +(cut match:substring <> 2)))
>> +   (cabal-rx-result-with-version
>> +(string-match "([a-zA-Z0-9_-]+) *([<>=]+) *([0-9.]+) *" 
>> haskell))
>> +   (cabal-rx-result-without-version
>> +(string-match "([a-zA-Z0-9_-]+)" haskell))
>> +   (cabal-impl-name (or (and=> cabal-rx-result-with-version
>> +   (cut match:substring <> 1))
>> +(match:substring
>> + cabal-rx-result-without-version 1)))
>> +   (cabal-impl-version (and=> cabal-rx-result-with-version
>> +  (cut match:substring <> 3)))
>> +   (cabal-impl-operator (and=> cabal-rx-result-with-version
>> +   (cut match:substring <> 2)))
>> +   (comparison (and=> cabal-impl-operator
>> +  (cut string-append "string" <>
>
> Again I feel we need one or more auxiliary procedures and/or data types
> here to simplify this part (fewer local variables), as well as shorter
> identifiers.  WDYT?


I've added two helper functions to make it easier to read.

> The existing tests here are fine, but they are more like integration
> tests (they test the whole pipeline.)  Maybe it would be nice to
> directly exercise ‘read-cabal’ and ‘eval-cabal’ individually?

It is true that the tests are for the whole pipeline, but they catch
most of the problems (problems in any function along the chain) with
the smallest number of tests :-). I'm not very keen on doing fine-grained
testing. Sorry.

I've removed the test with TABs because the Cabal documentation says
explicitly that they are not allowed.
https://www.haskell.org/cabal/users-guide/developing-packages.html#package-descriptions

I've changed the second test to check the use of braces (multi-line
values have still to be indented).

Regards,
Fede
From f422ea9aff3aa8425c80ea

Re: Guix "ops"

2015-06-01 Thread Pjotr Prins
> There's also unanswered questions like: How should we keep track of
> state?  How do we reconfigure already deployed machines?  How do we shut
> down a deployment and unprovision the resources it used?  Basically, how
> many hooks does the  record type need to cover everything?

The current 'deploy' basically generates a machine from scratch -
there is some similar functionality in Nix. I read the code and it is
pretty simplistic for now which is possible when you generate a
machine once.

What I would like to have is a state-machine that can be rerun within
an installed deploy in the vein of Puppet/Chef/cfengine but without
the complicated setup of a client/server model (the server 'model'
should simply be a git repository with branches).

That means that after 'deploy' we can rerun the state, and it
checks/updates files that may have changed. I use something like
that to manage my hosts.allow configuration (for example). When I want to
add an IP address, I change the state and rerun 'deploy'. In the past
I used to run cfengine. Later I wrote cfruby/cfenjin which I still run
today for deployment and updates. For me it is time to write something
that plays well with GNU Guix.

Are you envisaging something like that? Something that reruns state
and updates? It is a lot more complicated because you need a way to
define state, modify files, and allow for transactions and roll-back.
Ideally the system should execute in parallel (for speed) and be
ordered (i.e. if two methods change the same file, the order matters).
BTW, cfruby lacks transactions and parallel execution; we could start
without them.

The first step is to define state by creating 'classes' of machines.
One class could be deploy 'sshd with IP restriction'. That would be a
good use case. 
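
As a purely hypothetical illustration of such a 'class' (neither this
record type nor these service names exist in Guix today), the desired
state could be plain data that 'deploy' re-applies on each run:

```scheme
;; Hypothetical sketch only: <machine-class> and the service names
;; below are invented; this just shows the state-as-data idea.
(use-modules (srfi srfi-9))

(define-record-type <machine-class>
  (machine-class name services)
  machine-class?
  (name     machine-class-name)
  (services machine-class-services))

;; 'sshd with IP restriction': adding an address to this list and
;; rerunning 'deploy' would regenerate hosts.allow accordingly.
(define sshd-restricted
  (machine-class
   "sshd-with-ip-restriction"
   `((sshd (hosts-allow "192.168.1.10" "10.0.0.0/8")))))
```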

Pj.



Re: RFC: building numpy against OpenBLAS.

2015-06-01 Thread Ricardo Wurmus

Ricardo Wurmus writes:

> python-numpy currently depends on Atlas, which means that it cannot be
> substituted with a binary built elsewhere.  OpenBLAS is an alternative
> to Atlas and the binary can be used on all supported CPUs at runtime.
> This makes it possible for us to make numpy substitutable.
>
> We currently do not have a working OpenBLAS on MIPS, so the attached
> patch selects OpenBLAS as an input only when not on MIPS.  Some
> additional configuration happens only when "atlas" is not among the
> inputs.

If there are no objections I'd like to go ahead and push this patch to
build numpy with OpenBLAS.  From the comments I gather that it's not as
controversial a change as I suspected.
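
The shape of the change is roughly the following sketch; this is not
the actual patch, and `atlas`/`openblas` are assumed to be the package
variables bound elsewhere in the tree:

```scheme
;; Sketch of the idea, not the actual patch: choose the BLAS input
;; at package-definition time, keeping ATLAS on MIPS where OpenBLAS
;; does not yet build.
(define (blas-input)
  (if (string-prefix? "mips" (or (%current-target-system)
                                 (%current-system)))
      `(("atlas" ,atlas))
      `(("openblas" ,openblas))))
```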

~~ Ricardo



[PATCH] Add HTSlib.

2015-06-01 Thread Ricardo Wurmus
>From 708093dc001d46a2ec487873e8150861b05872da Mon Sep 17 00:00:00 2001
From: Ricardo Wurmus 
Date: Mon, 1 Jun 2015 15:06:04 +0200
Subject: [PATCH] gnu: Add HTSlib.

* gnu/packages/bioinformatics.scm (htslib): New variable.
---
 gnu/packages/bioinformatics.scm | 35 +++
 1 file changed, 35 insertions(+)

diff --git a/gnu/packages/bioinformatics.scm b/gnu/packages/bioinformatics.scm
index b9a3412..820811d 100644
--- a/gnu/packages/bioinformatics.scm
+++ b/gnu/packages/bioinformatics.scm
@@ -919,6 +919,41 @@ sequencing (HTS) data.  There are also an number of useful utilities for
 manipulating HTS data.")
 (license license:expat)))
 
+(define-public htslib
+  (package
+(name "htslib")
+(version "1.2.1")
+(source (origin
+  (method url-fetch)
+  (uri (string-append
+"https://github.com/samtools/htslib/releases/download/";
+version "/htslib-" version ".tar.bz2"))
+  (sha256
+   (base32
+"1c32ssscbnjwfw3dra140fq7riarp2x990qxybh34nr1p5r17nxx"
+(build-system gnu-build-system)
+(arguments
+ `(#:phases
+   (modify-phases %standard-phases
+ (add-after
+  'unpack 'patch-tests
+  (lambda _
+(substitute* "test/test.pl"
+  (("/bin/bash") (which "bash")))
+#t)
+(inputs
+ `(("zlib" ,zlib)))
+(native-inputs
+ `(("perl" ,perl)))
+(home-page "http://www.htslib.org";)
+(synopsis "C library for reading/writing high-throughput sequencing data")
+(description
+ "HTSlib is a C library for reading/writing high-throughput sequencing
+data.  It also provides the bgzip, htsfile, and tabix utilities.")
+;; Files under cram/ are released under the modified BSD license;
+;; the rest is released under the Expat license
+(license (list license:expat license:bsd-3
+
 (define-public picard
   (package
 (name "picard")
-- 
2.1.0




Re: PATCH: LibreOffice

2015-06-01 Thread John Darrington
On Sun, May 31, 2015 at 10:32:39PM +0200, Ludovic Courtès wrote:
 Andreas Enge  skribis:
 
 > From what I understood, no. Libreoffice seems to expect the tarball in a
 > special location, and the libreoffice build system takes care of unpacking it.
 > But I am not 100% sure whether the patching we do is needed or not;
 > maybe everything would compile with the unchanged tarball. One of the problems
 > is that the patching changes dates and causes an "autoreconf" (or similar)
 > after unpacking anyway, so part of the /bin/sh-patching is reverted.
 
 I think we should try hard to not make Libreoffice depend on
 Autoconf/Automake; it’s always a good idea to avoid it, but even more so
 here given the build time and size of the thing.  :-)
 
 Does that seem doable?
 
 Thanks for the great progress on this!
 
Libreoffice is big and messy.  But it works (sort of) and a lot of people like 
it.

Obviously any exercise in Free Software is "doable" (we have the source code!),
but my experience in packaging for Guix tells me that it can be very easy to
fall into the trap of making changes to the upstream packages which effectively
forks them.  Then we become the maintainer of a forked package.

J'

 

-- 
PGP Public key ID: 1024D/2DE827B3 
fingerprint = 8797 A26D 0854 2EAB 0285  A290 8A67 719C 2DE8 27B3
See http://sks-keyservers.net or any PGP keyserver for public key.





[PATCH] Attempt to fix build of sra-tools on i686.

2015-06-01 Thread Ricardo Wurmus
Currently sra-tools fails to build on i686 because the custom
configuration system cannot find the "ilib" directory for "ncbi-vdb".
It turns out that "ncbi-vdb" had a broken "install-interfaces" phase on
i686.

The attached patch fixes the phase on i686 and should thus fix the build
of sra-tools on i686.
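
The underlying mismatch, sketched in Guile (the stand-in below is a
simplification for illustration, not Guix's actual
`system->linux-architecture` definition):

```scheme
;; The Nix system type and the kernel architecture name differ on
;; i686, so deriving the directory name from SYSTEM naively fails:
(car (string-split "i686-linux" #\-))      ; => "i686", but the
                                           ; directory is "i386"

;; Simplified stand-in for system->linux-architecture:
(define (system->linux-architecture system)
  (let ((arch (car (string-split system #\-))))
    (if (string-prefix? "i686" arch) "i386" arch)))

(system->linux-architecture "i686-linux")  ; => "i386"
```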

>From 6d3e22354c767f76711e3f73a2e26d7d7783f57a Mon Sep 17 00:00:00 2001
From: Ricardo Wurmus 
Date: Mon, 1 Jun 2015 11:31:29 +0200
Subject: [PATCH] gnu: ncbi-vdb: Use "i386" instead of "i686" in directory
 name.

* gnu/packages/bioinformatics.scm (ncbi-vdb)[arguments]: Copy libraries from
  "linux/gcc/i386" directory instead of "linux/gcc/i686" when building on
  i686.
---
 gnu/packages/bioinformatics.scm | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/gnu/packages/bioinformatics.scm b/gnu/packages/bioinformatics.scm
index 71055d1..9f6d75a 100644
--- a/gnu/packages/bioinformatics.scm
+++ b/gnu/packages/bioinformatics.scm
@@ -1506,11 +1506,13 @@ simultaneously.")
(assoc-ref inputs "hdf5"))
 (alist-cons-after
  'install 'install-interfaces
- (lambda* (#:key system outputs #:allow-other-keys)
+ (lambda* (#:key outputs #:allow-other-keys)
;; Install interface libraries
(mkdir (string-append (assoc-ref outputs "out") "/ilib"))
(copy-recursively (string-append "build/ncbi-vdb/linux/gcc/"
-(car (string-split system #\-))
+,(system->linux-architecture
+  (or (%current-target-system)
+  (%current-system)))
 "/rel/ilib")
  (string-append (assoc-ref outputs "out")
 "/ilib"))
-- 
2.1.0



Re: Merging ‘HACKING’ in the manual?

2015-06-01 Thread Alex Kost
Ludovic Courtès (2015-05-31 22:20 +0300) wrote:

> Mathieu Lirzin  skribis:
>
>> When reacting, I didn't realize that most of your statement is actually
>> documented in the recent "Running Guix Before It Is Installed" node in
>> doc/guix.texi.
>>
>> Nevertheless, I don't find it really relevant to give hacking information
>> in the 'Installation' chapter.
>
> Agreed but...
>
>> The attached patch tries to make the location of this useful
>> information more consistent.
>
> ... c71979f just did the opposite move, so no.  :-)
>
>> Even if I find this patch appropriate ;-), my personal preference would
>> be to delete HACKING, move all its information into the 'Contributing'
>> chapter of the Holy Bible (with appropriate refinement of course!), and
>> refer to it in README. Opinions about this?
>
> Yeah, probably.  I’m not completely sure about moving things like patch
> submission and coding style in there; on one hand, it’s not something
> one would expect in a “user manual”, but on the other hand, it’s nice to
> have everything consistently maintained in one place.
>
> Another option would be to have a second .texi document for these things
> (like Findutils, which has a ‘findutils-maint’ document.)
>
> What do people think?

I don't have a strong opinion.  It might be a separate “Hacking” (or
whatever) section in the current manual or another “maint” info manual.
Both solutions look good to me.  I have a slight preference for the
first variant though.

-- 
Alex