[Nix-dev] Linux config options

2015-02-10 Thread Wout Mertens
Just wondering out loud with probably no actionable change:

Why are the kernel options implemented as strings ("FOO y") instead of an
attribute set ({ foo = "y"; })?

Of course that means you can easily import your own .config file as
described at https://nixos.org/wiki/How_to_tweak_Linux_kernel_config_options,
but would an attribute set not allow things like "if the kernel has this
feature enabled, install this package" or "if you enable this module the
kernel must have foo set to one of these values"?

Plus, having to convert your own .config file to an attribute set enables
you to only mention the options that you actually care about, thus easily
merging with the default configs in the future.
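
As a hypothetical sketch (none of this exists today; the `kernelConfig`
attribute and merge behaviour are invented purely for illustration), the
attribute-set form could look like:

```nix
# Hypothetical sketch: kernel options as an attribute set that the
# module system could merge with the defaults and query from elsewhere.
{
  kernelConfig = {
    MODULES   = "y";
    FW_LOADER = "m";
    # mention only the options you care about; the rest keep defaults
  };
}
```

Other modules could then inspect individual options, e.g. refuse to enable
a module-based service unless `kernelConfig.MODULES == "y"`.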

Thoughts?

Wout.
___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


Re: [Nix-dev] Linux config options

2015-02-10 Thread Eelco Dolstra
Hi,

On 10/02/15 14:48, Wout Mertens wrote:

> Just wondering out loud with probably no actionable change:
> 
> Why are the kernel options implemented as strings ("FOO y") instead of an
> attribute set ({ foo = "y"; })?
> 
> Of course that means you can easily import your own .config file as described
> at https://nixos.org/wiki/How_to_tweak_Linux_kernel_config_options, but would 
> an
> attribute set not allow things like "if the kernel has this feature enabled,
> install this package" or "if you enable this module the kernel must have foo 
> set
> to one of these values"?

pkgs/os-specific/linux/kernel/manual-config.nix allows passing a "config"
attribute set containing kernel config options, e.g.

 config = { CONFIG_MODULES = "y"; CONFIG_FW_LOADER = "m"; };

I don't know if that's exposed to NixOS modules though.
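
For reference, a call might look roughly like this (a sketch built around the
snippet above; `linuxManualConfig` is the usual entry point to
manual-config.nix, but the exact argument set it expects may differ, so check
the file before relying on this):

```nix
# Sketch: build a kernel passing an explicit config attribute set.
let
  pkgs = import <nixpkgs> {};
in
pkgs.linuxManualConfig {
  inherit (pkgs.linux) version src;
  config = {
    CONFIG_MODULES   = "y";
    CONFIG_FW_LOADER = "m";
  };
}
```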

-- 
Eelco Dolstra | LogicBlox, Inc. | http://nixos.org/~eelco/


Re: [Nix-dev] Linux config options

2015-02-10 Thread Matthias Beyer
On 10-02-2015 14:56:55, Eelco Dolstra wrote:
> Hi,
> 
> pkgs/os-specific/linux/kernel/manual-config.nix allows passing a "config"
> attribute set containing kernel config option, e.g.
> 
>  config = { CONFIG_MODULES = "y"; CONFIG_FW_LOADER = "m"; };

Uh, nice to know! Maybe there should be an option to tell the builder
to treat every "m" as "y" (i.e. don't build modules, but compile
everything directly into the kernel).

Would be nice for own kernels, I guess.
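
Such a builder option could amount to one `mapAttrs` over the set
(`lib.mapAttrs` is real; the surrounding builder option itself is
hypothetical):

```nix
# Sketch: rewrite every module option ("m") to built-in ("y")
# before handing the set to the kernel builder.
let
  lib = (import <nixpkgs> {}).lib;
in
lib.mapAttrs (name: value: if value == "m" then "y" else value)
  { CONFIG_MODULES = "y"; CONFIG_FW_LOADER = "m"; }
# => { CONFIG_MODULES = "y"; CONFIG_FW_LOADER = "y"; }
```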

---

Besides: Is there any documentation on how to build my own kernel
(and install it, of course) as a system package? Maybe even from a local
clone of the kernel tree?

-- 
Mit freundlichen Grüßen,
Kind regards,
Matthias Beyer

Proudly sent with mutt.
Happily signed with gnupg.




Re: [Nix-dev] Use Haskell for Shell Scripting

2015-02-10 Thread Ertugrul Söylemez
> I think you are thinking too big-system-design instead of
> quick-scripting about that.

Let's say I don't draw a line between small systems and large systems
when it comes to abstractions, because algebraic abstractions are cheap.

IMO a minimum requirement for a language to be called "functional" is
that functions are cheap.  I mean cheap as in "almost free", just like
mutable variables are almost free in imperative programming.
Quantification ("generics" in most languages) must be entirely free.
It's not in C++, but in boxing languages it is.

Now if quantification is free and your functions are almost free, then
algebraic abstractions are also almost free.  You might as well use them
everywhere, even in your quick-n-dirty scripts, because they make your
code shorter, more maintainable and usually more readable as well
(especially the types).

While you're reading this you should not think of OOP.  OO abstractions
are expensive in every way.  They make your program slower and your
source code longer.  They are not suitable for quick-n-dirty scripts.


> I have added an external module to StumpWM that goes completely
> against the default logic of connecting screens, workspaces and
> windows.
>
> It just moves windows between the workspaces when it needs to.
>
> I still use the hard parts of a WM (X protocol handling) and just
> added my own implementation of the fun parts.

Yeah, that sounds pretty much like what I do right now, except with a
different WM at the core.


> I think I was not clear with my idea about Ratpoison. I meant the Unix
> way of doing things: there is Ratpoison that can easily receive
> commands to change the splits and move the windows. Your logic is in
> Haskell and it just sends commands to Ratpoison (which has been
> debugged once).

The Unix philosophy can manifest in a number of ways.  Usually you make
programs pipe-friendly and perhaps add a configuration file.  That's the
traditional approach, which has one huge drawback:  When communicating
all structure must be flattened, because pipes understand only the
"stream of bytes" notion.  Then at the receiving end the structure must
be recovered from the byte-stream.  For long-running programs like a WM
it also means that you need to act non-blockingly and handle errors
sensibly.

An interesting alternative is that you don't make your program a
"program" in the first place.  After all what is a "program"?  It's a
library together with a `main` procedure.  Just leave out `main`.  Now
it's just a library and "configuring" amounts to the user writing an
actual program that uses the library.  That way there is no need to go
through the flatten-and-recover process from above.  You can stay within
the language's type system.  That's what most Haskell programs do,
including xmonad.

So I'm probably already doing what you're suggesting, except that it
doesn't look like Unix, because there is no "pipe" involved. =)
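
For instance, an xmonad setup is exactly this pattern: the user's "config
file" is a small Haskell program whose `main` hands a configuration value
to the library (a minimal sketch; the field values are arbitrary examples):

```haskell
-- A complete xmonad "configuration": an ordinary Haskell program
-- that calls the window-manager library from its own main.
import XMonad

main :: IO ()
main = xmonad defaultConfig
  { terminal = "urxvt"    -- which terminal emulator to spawn
  , modMask  = mod4Mask   -- use the Super key as the modifier
  }
```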




Re: [Nix-dev] Unstable Nixpkgs on stable NixOS (was: Automatically locking the screen with xautolock)

2015-02-10 Thread Wout Mertens
Your setting NIX_PATH to `nixpkgs=~/.nix-defexpr/channels/nixpkgs` makes a
lot of sense to me and I wouldn't mind it being the default for users...

In general though the whole home dir setup could use some love. There's
.nix-defexpr/, .nix-channels, .nix-profile@ and .nixpkgs/config.nix, where
it would be a lot cleaner to have everything under ~/.nix/ and on OS X
those should really be under ~/Library/Nix/.

Even nicer would be if everything (except the profile) defaulted to the
system-wide versions until the user takes an action like choosing a channel
or running nix-env. At that point the user would get a warning that
per-user versions are being set up, and everything would be configured
automatically. Basically making
https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/programs/shell.nix#L23-L62
lazy.

Wout.

On Tue Feb 10 2015 at 4:18:06 AM Michael Alyn Miller 
wrote:

> The 02/08/2015 12:21, Jeffrey David Johnson wrote:
> > Nice! I'm currently binding "gksu 'i3lock -c00 &
> > pm-suspend'" to a hotkey using i3, but this is better.  Is
> > there a standard way to get it in my fork of nixpkgs, which is
> > tracking the release-14.12 branch?  (I could just copy and
> > paste but if there's a better way now would be a good time to
> > learn.)
>
> I certainly don't have an authoritative answer to this question,
> but I'll tell you what I am doing and then maybe someone with
> much more NixOS/Nixpkgs experience will chime in with
> tweaks/guidance.
>
> In general I have the following goals for my NixOS system:
>
> 1. I want everything to be reproducible, which of course means that my
>system is configured in /etc/nixos/configuration.nix, but also that
>any packages that I install as an individual user are reproducible as
>well.
> 2. Related to the above, I only want to put "system" packages into
>configuration.nix.  My theory here is that most of my package
>management is going to be done as a regular user, by creating
>project-specific nix-shell environments, etc.  Also, `nix-env -q` is
>a nice way to see everything that is installed and it doesn't appear
>that similar tools exist at the NixOS level.
>
> Goal #1 means that configuration.nix is pretty basic -- the bare minimum
> number of packages to get my system working and launch an X environment
> (the window manager, for example, but no apps).
>
> Goal #2 was more complex.  I found the following two references on this
> topic:
>
> -  with_nix-env_like_with_configuration.nix.3F>
> - 
>
> I went with the first option because I wanted to be able to use `nix-env
> -q` to see what I had installed, regardless of how I did the install
> (manually or through the "myPackageSelections.nix" file).  I can provide
> more details here if you like.
>
> With that choice out of the way, the next problem that I ran into was
> getting access to newer packages.  I have contributed a couple of
> packages to Nixpkgs, fixed some others, etc. and so far I had just been
> manually installing them out of my nixpkgs checkout.  This was not
> entirely compatible with my first goal, because now I had installed
> packages that were not listed in my configuration.
>
> Enter the nixpkgs-unstable channel!  That channel seems to be updated
> pretty frequently and means that changes are available in binary form
> within a few days.  Thankfully my second goal meant that I already had a
> perfect split between stable and unstable: my NixOS configuration could
> stay on 14.12 and my user-level Nixpkgs was free to go unstable.
>
> Getting this working was relatively straightforward, although I had to
> modify my environment in a couple of unexpected ways (more on this
> later).
>
> The simple part is the `nix-channel --add
> https://nixos.org/channels/nixpkgs-unstable` and `nix-channel --update`
> as a normal user.  Some things worked after
> that, but not everything.  `nix-env -i whatever` will happily use the
> nixpkgs-unstable channel because it finds that channel in ~/.nix-defexpr.
> Adding in the -f flag to specify your own expression means that nix-env
> no longer knows about the channel though.  Instead, nix-env falls back
> to NIX_PATH, which by default points at the stable NixOS channel and
> therefore has no idea that you want to use nixpkgs-unstable.
>
> I found that very confusing, although that's probably because of the
> subtlety that is the myPackageSelections.nix-based way of managing my
> package list.  In any event, I had to dig through the docs to understand
> the situation.
>
> I fixed this problem by setting NIX_PATH to
> `nixpkgs=~/.nix-defexpr/channels/nixpkgs` and removing the
> channels_root symlink from ~/.nix-defexpr.  That last
> step probably isn't necessary, but every now and then nix-env would
> complain about duplicate packages and besides that I don't want to ge

Re: [Nix-dev] Linux config options

2015-02-10 Thread Shea Levy
For my own use cases at least, I’ve found that when the generic config is not 
good enough it’s better to just generate a config on my own the traditional way 
(via make nconfig or similar) and pass it to manual-config.
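
That workflow might look roughly like this (a sketch; check
manual-config.nix for the exact arguments, and note that
`./my-kernel.config` is a placeholder for wherever you save the file that
`make nconfig` writes):

```nix
# Sketch: reuse a .config produced the traditional way
# (make nconfig in a kernel tree) as the kernel configuration.
let
  pkgs = import <nixpkgs> {};
in
pkgs.linuxManualConfig {
  inherit (pkgs.linux) version src;
  configfile = ./my-kernel.config;  # saved output of `make nconfig`
}
```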

> On Feb 10, 2015, at 1:56 PM, Matthias Beyer  wrote:
> 
> 
> Uh, nice to know! Maybe there should be an option to tell the builder
> to include all "m" as "y" (Don't use modules but include the stuff
> directly in the kernel).
> 
> Would be nice for own kernels, I guess.
> 
> ---
> 
> Besides: Is there a documentation around how to build my own kernel
> (and install it, of course) as system package? Maybe even from a local
> clone of the kernel tree?
> 
> -- 
> Mit freundlichen Grüßen,
> Kind regards,
> Matthias Beyer
> 
> Proudly sent with mutt.
> Happily signed with gnupg.


Re: [Nix-dev] A few questions about ARM support and NixOS on a Chromebook

2015-02-10 Thread Wout Mertens
There's another option: build natively with distcc pointing to
cross-compilers on x86 boxes. All the configuration etc happens natively
and the compiles themselves are sped up. I wonder if that's the approach
Vladimír Čunát took earlier? He made the notes on how to set up distcc for
raspberry pi on the wiki iirc.

Yet another option: run the above option in qemu, running the whole thing on
an x86 box with the heaviest lifting done natively.

These only speed up C/C++ of course.

Wout.

On Mon, Feb 9, 2015, 5:01 PM Harald van Dijk  wrote:

> On 09/02/2015 15:57, James Haigh wrote:
>
> On 09/02/15 14:16, Harald van Dijk wrote:
>
> On 09/02/2015 14:55, James Haigh wrote:
>
> On 28/01/15 07:42, Luke Clifton wrote:
>
> Hi Bjørn,
>
>  I have read that thread. I agree with you 100% that native builds (on
> real or virtual hardware) is the only way this can work. Upstream doesn't
> usually care if their software can cross compile, and they can't maintain
> it themselves even if they did. Sometimes it isn't even an option, e.g. GHC
> still can't cross compile template Haskell yet.
>
> I don't understand why cross-compilation is even a thing, other than
> decades of false assumptions being baked into compilers.
> As I understand, if a compiler (and by ‘compiler’ I'm referring to the
> whole toolchain required for compilation) is taking source code and
> compilation options as input, and giving object code for the specified
> platform as output, it is called ‘cross-compiling’ if the specified target
> platform is different to the platform that the compiler is running on. If
> GCC is running on ARM, compiling code ‘natively’ to ARM successfully, it is
> counterintuitive that it would fail to build for ARM if GCC is running on
> x86. And vice versa. A compiler should produce object code for a target
> platform that implements the source code – it may not have the same
> efficiency as the output of other compilers (or with other compilation
> options), but should have the same correctness when execution completes. If
> the source code being compiled is a specific version of the GCC source code
> itself, and it is compiled for both x86 and ARM, then if the compilation is
> computationally correct, both compilations of GCC should produce programs
> that, although will compute in a different way and with different
> efficiency, should give the exact same object code when given the same
> source code and parameters. So if the target platform parameter is ARM,
> they should both build exactly the same ARM machine code program.
>
> All of this is true, but the toolchain usually doesn't have any problems
> with cross-compilations.
>
> However, evidently this is not the case unfortunately. So the
> compilers or their toolchains are, in essence, receiving the platform that
> they are running on as ‘input’ to the build, and making assumptions that
> this build platform has something to do with the target platform. I.e. they
> are _aware_ of the platform that they're building on, whereas
> theoretically, they shouldn't be. Apparently this has a lot to do with
> configure scripts.
>
> The configure scripts, or similar machinery in non-autoconf packages, are
> part of the package, not part of the toolchain. Many programs use runtime
> checks in configure scripts. A trivial example that hopefully doesn't exist
> in any real package:
>
> If a package only compiles on platforms where sizeof(int) == 4, or needs
> special code on platforms where sizeof(int) != 4, its configure script
> might try to detect those platforms by compiling and linking
>
> int main(void) {
>   return sizeof(int) != 4;
> }
>
> and then executing it. If the execution succeeds (i.e. returns zero), then
> sizeof(int) == 4. If the execution doesn't succeed, then the configure
> script assumes that sizeof(int) != 4, even though it's very well possible
> that the only reason that execution fails is that the generated executable
> is for a different platform.
>
> Other examples are build environments that build generator tools during
> the build and run them to produce further source files to compile. The
> generator tool must be compiled with the build compiler, not with
> the host compiler, or execution will fail when cross-compiling. Still, many
> packages build such tools with the host compiler anyway, because upstream
> only tests native compilations. This too is not an issue with the toolchain.
>
> But what I'm saying is that if the package succeeds in compiling natively
> but fails to cross-compile, then this is an issue with the
> compiler/toolchain. Yes it can be solved by writing configure scripts that
> support cross-compiling, but really, the compiler toolchain should isolate
> this such that the compilation is deterministic regardless of build
> platform.
> In your example, I'm saying that it should be the job of the compiler
> toolchain to ensure that ‘sizeof(int) == 4’ gives the correct result for
> the target platform. If the only feasible wa

Re: [Nix-dev] A few questions about ARM support and NixOS on a Chromebook

2015-02-10 Thread Lluís Batlle i Rossell
I used this with the raspberry pi, and I made notes in the wiki at that time.

On Tue, Feb 10, 2015 at 03:14:10PM +, Wout Mertens wrote:
> There's another option : build natively with distcc pointing to
> cross-compilers on x86 boxes. All the configuration etc happens natively
> and the compiles themselves are sped up. I wonder if that's the approach
> Vladimír Čunát took earlier? He made the notes on how to set up distcc for
> raspberry pi on the wiki iirc.
> 
> Yet another option : run the above option in qemu, running the whole thing on
> an x86 box with the heaviest lifting done natively.
> 
> These only speed up C/C++ of course.
> 
> Wout.
> 

Re: [Nix-dev] UI management (Was: Use Haskell for Shell Scripting)

2015-02-10 Thread Ertugrul Söylemez
>> It's not that esoteric.  Think about the average 2013 laptop or PC
>> with plenty of RAM.  When you're done with a certain task, you close
>> its window, simply because you're used to that and perhaps because
>> you draw a relationship to the physical world, where you prefer your
>> desk to be clean and organised.
>
> Erm, some of the things need to know they should stop eating CPU
> cycles and stop touching their resident working set — I do think that
> my 8 GiBs of RAM are sometimes better used with more disk caching of
> my active-in-brain tasks and not caching LibreOffice and
> Firefox. Sometimes not.

That's actually fairly easy to handle on Linux by using namespaces,
sigstopping and sigconting.  There is probably also a way to force
programs to go to swap by using cgroups.  Be patient, I'm still working
on a DSL for this design. =)

Unfortunately it entails removing systemd from the top of the process
tree, which is a lot of work for a single person, but I'm highly
motivated.  By the way, this is not my usual systemd hatred.  There are
technical reasons to remove it from the top.


>> So what's the solution?  Simple: Workspaces must be cheap, dynamic
>> and extremely easy to manage.  There should not be a rigid mapping
>> between workspaces and windows.  Windows should easily be able to
>> belong to multiple workspaces.  A generalised xmonad could do it, but
>> the current one can't.
>
> My experience says that moving towards your goal is a good idea w.r.t.
> status quo, but halfway there you'll switch the direction.

I'm not sure why.  This is not about a certain UX concept, but just
about a window manager being sufficiently general that I could start to
experiment.


> What is your vision and what is your experience on your way there?

Wow, big question!  I'll try: It's twofold.

A long time ago (≥ 13 years), little me was using Windows, and he always
thought that there is something wrong with this concept of "windows on a
desktop".  "Folders" felt unnatural, too.  Everything about my operating
system tried to mimic the physical world.  I never really cared about
learning curves, so I always asked myself what it would look like if we
used a virtual space to its full potential instead of locking ourselves
up within the boundary of our own imagination ("office work" means
"sitting in front of a desk" to most people).

A later me had already switched to Linux and discovered workspaces.  It
was a minor improvement, because this time you have multiple virtual
desks, so you had no reason to drop something on the floor (minimising).
But it still felt wrong, and this feeling continues to this day.

My vision for UX is that we move away from the physical world.  In the
computer we can construct our own universe with the rules of physics
constructed or bent to support the work we are doing, perhaps in a fun
way.  Let's not think of the space on our screen as a virtual desk.
It's a projection of a space that we programmers create, and we have
complete freedom to create whatever we want!  In the same way I always
thought that our filesystem concept is wrong.  There is no inherent
reason to have "files in folders".  I believe that a disk should act
more like a giant mutable variable.

My vision for software in general is what I call "organic software".
I'm actually writing an article about it, so let me just quote a few
paragraphs of the section *Moronware* from that one:

"View programs, especially interactive ones, as part of your team.
Imagine that you have handwritten a bunch of documents that you would
like me, your teammate, to review.  So you hand me a copy and I start
reading.  Later that day you return to my office, and I'm just sitting
there, doing nothing.  You ask what's wrong, and I point to one of the
words in the document I was reading.  I simply couldn't decipher your
handwriting, so I stopped and waited for your assistance instead of
making a note and continuing.  Seems absurd, right?  No good teammate
would do that.  That's true.  Our programs are probably not such good
teammates.

When humans act the way our programs do we like to call them morons.  We
should do that with programs instead of humans.  They aren't self-aware
beings (yet), so they won't be offended, and they really do act like
morons.  I propose the term *moronware*.

How can we write software that isn't moronware?  Good design and proper
responses to exceptional circumstances can make our programs faster,
more responsive, more flexible and less like untrained animals.  They
make our programs *dependable* and in the end *trustworthy*, because we
can feed tasks to our program and focus on other things or even just
pick up a cup of coffee, because we *know* that it will do its best to
complete as much as possible without our intervention."



Re: [Nix-dev] Use Haskell for Shell Scripting

2015-02-10 Thread Ertugrul Söylemez
> Data type declarations are not free in any case, I think.

Compared to what?  Algebraic abstractions usually compile to exactly the
code you would have written if you had not used abstraction.


> Well, you started talking like you were considering some limitation of
> XMonad hard to work around.

I am.  I'm using a tree-shaped set of workspaces, but I need to encode
this tree within the names of the workspaces, which is incredibly
awkward.


>> So I'm probably already doing what you're suggesting, except that it
>> doesn't look like Unix, because there is no "pipe" involved. =)
>
> Well, with sockets it is easier to do decoupling, and cross-language
> development.

Decoupling seems unrelated, but the cross-language point is valid of
course.  Also sockets can work over a network.  So yeah, it's a
tradeoff.  For my window manager I really don't want to go through that
complexity.  It's written in Haskell, I'd like to configure in Haskell
and I'm not planning to offload my WM state to another machine.  So I'm
glad I can just use xmonad as a library.




[Nix-dev] strongSwan problem

2015-02-10 Thread Bas van Dijk
I was wondering if somebody here has experience with strongSwan on NixOS.

I'm having a problem connecting to a gateway as I described in the
following mail to the strongSwan mailinglist:

https://lists.strongswan.org/pipermail/users/2015-February/007422.html

Cheers,

Bas


Re: [Nix-dev] Use Haskell for Shell Scripting

2015-02-10 Thread Ertugrul Söylemez
>>> Data type declarations are not free in any case, I think.
>>
>> Compared to what?  Algebraic abstractions usually compile to exactly
>> the code you would have written if you had not used abstraction.
>
> Data type declarations have to be written. Store-everything-in-hash-
> tables is slower but quicker to write as long as I can keep the entire
> program structure in my head.
>
> Also I am trying to learn to write reasonable Haskell code for a small
> data-crunching script without just "writing Fortran into Haskell";
> well, many of its features are minor annoyances in that mode even if
> they are useful for giving structure to a large program.

It's seldom that you have to write your own data types, if you don't
want to.  Basic types, functions, products and coproducts can express
anything you want that isn't a tightly packed array of machine words.

But if you really want to dump everything into a table-like data
structure, you can use Data.Map or Data.IntMap for everything.  Together
with GHC's OverloadedLists extension this should make your code just as
short as the usual everything-is-a-hash-table in most scripting
languages.
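
A small sketch of that style (the `IsList` instance for `Map` that
OverloadedLists relies on ships with recent versions of the containers
package):

```haskell
{-# LANGUAGE OverloadedLists #-}
import qualified Data.Map as M

-- With OverloadedLists a list literal of pairs builds a Map directly,
-- much like a hash-table literal in a scripting language.
ages :: M.Map String Int
ages = [("alice", 30), ("bob", 25)]

main :: IO ()
main = print (M.lookup "alice" ages)  -- prints: Just 30
```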

However, you may want to write type signatures anyway.  It doesn't
increase your development time considerably.


>>> Well, you started talking like you were considering some limitation
>>> of XMonad hard to work around.
>>
>> I am.  I'm using a tree-shaped set of workspaces, but I need to
>> encode this tree within the names of the workspaces, which is
>> incredibly awkward.
>
> Well, I think you should be able just to write alternative UI for
> workspace selection simply ignoring the old one, no?

Unfortunately not.  Everything in xmonad goes through the core data
structure StackSet.




Re: [Nix-dev] Use Haskell for Shell Scripting

2015-02-10 Thread Ertugrul Söylemez
>> It's seldom that you have to write your own data types, if you don't
>> want to.  Basic types, functions, products and coproducts can express
>> anything you want that isn't a tightly packed array of machine words.
>>
>> But if you really want to dump everything into a table-like data
>> structure, you can use Data.Map or Data.IntMap for everything.
>> Together with GHC's OverloadedLists extension this should make your
>> code just as short as the usual everything-is-a-hash-table in most
>> scripting languages.
>
> Actually, I use a ton of tuples-as-records in my crunching code in
> Common Lisp or Julia.
>
> Some of the shell tricks based on expansions are portable to Lisp, not
> worth it in Julia and definitely too costly in Haskell (learning
> Template Haskell is definitely outside my plans).

I don't really know TH either.  Occasionally I use TH actions defined in
a library (for example to derive safecopy instances or, less commonly,
to auto-generate lenses).  But TH somehow feels wrong and ugly.


>> However, you may want to write type signatures anyway.  It doesn't
>> increase your development time considerably.
>
> I also need to write matching to extract data from deep structures,
> no?

I'm not sure what you mean by that.  Perhaps you can drop me an off-list
mail, so we can talk about your specific application.


 I'm using a tree-shaped set of workspaces, but I need to encode
 this tree within the names of the workspaces, which is incredibly
 awkward.
>>>
>>> Well, I think you should be able just to write alternative UI for
>>> workspace selection simply ignoring the old one, no?
>>
>> Unfortunately not.  Everything in xmonad goes through the core data
>> structure StackSet.
>
> Why is it a problem? For hierarchically structured workspaces you just
> tell XMonad core to select the low-level workspace.

The trouble is that xmonad's idea of "set of workspaces" is encoded as a
list zipper.  It does not allow me to encode the tree.  It only ever
holds *lists* of workspaces and does not abstract over this data
structure.




Re: [Nix-dev] UI management (Was: Use Haskell for Shell Scripting)

2015-02-10 Thread Ertugrul Söylemez
> Also, I sometimes tend to use closing unneeded windows as a way of
> keeping track of short-term TODO list.

I think there doesn't have to be a semantic difference between closing a
window and dropping it into a virtual drawer.  Your short-term to-dos
would live elsewhere.


>> Unfortunately it entails removing systemd from the top of the process
>> tree, which is a lot of work for a single person, but I'm highly
>> motivated.  By the way, this is not my usual systemd hatred.  There
>> are technical reasons to remove it from the top.
>
> Will you share it already? I do have a bootable systemd-less system
> based on NixPkgs, and would probably contribute some service
> definitions to the thing you develop.

I'm very close to a bootable system.  Allow me to arrive there first,
then we can compare and exchange ideas and code.  I will definitely let
you know, because I'm interested in your way of doing it.

Publishing a working prototype works better for me.


>> I'm not sure why.  This is not about a certain UX concept, but just
>> about a window manager being sufficiently general that I could start
>> to experiment.
>
> Well, because what you said in that paragraph is close to what I tried
> to do at some time, and there are hidden costs. When they become
> revealed (and they are different for different people, of course), you
> will take the things you have by this time and change the direction of
> development according to new data.

Oh, that might happen of course.


>> [...] In the same way I always thought that our filesystem concept is
>> wrong.  There is no inherent reason to have "files in folders".  I
>> believe that a disk should act more like a giant mutable variable.
>
> At some point I saw RelFS and tried to use it for tagging files. Then
> I hit its limitations and wrote my own QueryFS. Now I use it for
> many things, but not for file tagging, because the hierarchical structure
> made it simpler for me just to think in terms of hierarchy.
>
> So directory structure was a good idea for file storage, in my
> opinion.

Sure, it is great for some things, but terrible for others.  The point
is that we don't get a choice.  Everything is designed around the "files
in directories" notion.  That's why an sqlite database is completely
opaque from the point of view of the operating system.  You need special
tools with a special UI to inspect databases.


>> My vision for software in general is what I call "organic software".
>> [...]
>> How can we write software that isn't moronware?  Good design and
>> proper responses to exceptional circumstances can make our programs
>> faster, more responsive, more flexible and less like untrained
>> animals.  They make our programs *dependable* and in the end
>> *trustworthy*, because we can feed tasks to our program and focus on
>> other things or even just pick up a cup of coffee, because we *know*
>> that it will do its best to complete as much as possible without our
>> intervention."
>
> Well, this vision is what I consider one of the problems of modern
> software development. It would be yet another tangent, though; do you
> want to go in that direction?
>
> In short, I want (but don't yet know how to achieve in some cases)
> predictable transparent tools usable as IA in Engelbart speak, not
> DWIM helpers with wannabe-AI.
>
> I want tools that can be easily transparently wrapped to try to do as
> much as possible or to stop at the first problem.

I find this topic very interesting.  I believe it needs to be discussed,
but this mailing list is probably not the right place to do it.  For
lack of a better place, feel free to write me off-list, if you would
like to discuss this further.




Re: [Nix-dev] Use Haskell for Shell Scripting

2015-02-10 Thread Ertugrul Söylemez
>>> Some of the shell tricks based on expansions are portable to Lisp,
>>> not worth it in Julia and definitely too costly in Haskell (learning
>>> Template Haskell is definitely outside my plans).
>>
>> I don't really know TH either.  Occasionally I use TH actions defined
>> in a library (for example to derive safecopy instances or, less
>> commonly, to auto-generate lenses).  But TH somehow feels wrong and
>> ugly.
>
> In both Julia and Common Lisp I use macros for many tasks and they
> make life much more comfortable. Of course, Haskell's type system may
> make it harder to use macros.

Oh, Common Lisp (CL) macros don't correspond to TH, but rather to
regular functions in Haskell.  We have first class actions together with
lazy evaluation.  What is "code is data is code" in CL is "actions are
first class values" in Haskell.

You only need TH when you need to generate something that is not first
class, for example class instances.
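A small sketch of that point (the example itself is invented): because Haskell is non-strict and actions are first-class values, a conditional-execution construct that would be a macro in CL is just an ordinary function:

```haskell
-- A CL-style "when" macro as a plain Haskell function: the action
-- argument is an ordinary value and is only run when the guard holds.
myWhen :: Monad m => Bool -> m () -> m ()
myWhen True  act = act
myWhen False _   = return ()

main :: IO ()
main = do
  myWhen (2 + 2 == 4) (putStrLn "guard held")
  myWhen False (error "never evaluated")   -- laziness: this error is never forced
```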


 However, you may want to write type signatures anyway.  It doesn't
 increase your development time considerably.
>>>
>>> I also need to write matching to extract data from deep structures,
>>> no?
>>
>> I'm not sure what you mean by that.  Perhaps you can drop me an
>> off-list mail, so we can talk about your specific application.
>
> Well, it looks like field names are not scoped, and if I use a plain ADT
> I have to write pattern matching to extract data from a member of a
> member of a structure.

You can chain both functions and lenses.  Extraction via functions:

(innerField . outerField) value

Extraction via lenses:

value ^. outerField . innerField

Update via lenses:

(outerField . innerField .~ 15) value

You can even get completely imperative by using a state monad:

outerField . innerField .= 15
outerField . otherInnerField .= "blah"

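For a feel of how such lens chains can be built, here is a minimal, self-contained van Laarhoven lens sketch using only base — the operators above come from the lens package, and the record names below are invented:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))
import Data.Functor.Const (Const (..))

-- A van Laarhoven lens is just a function; composing lenses is plain (.).
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

view :: Lens s a -> s -> a
view l = getConst . l Const

over :: Lens s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)

-- Invented example records.
data Inner = Inner { _temp :: Double } deriving Show
data Outer = Outer { _inner :: Inner } deriving Show

inner :: Lens Outer Inner
inner f (Outer i) = Outer <$> f i

temp :: Lens Inner Double
temp f (Inner t) = Inner <$> f t

main :: IO ()
main = do
  let v = Outer (Inner 20)
  print (view (inner . temp) v)        -- 20.0
  print (over (inner . temp) (+3) v)   -- Outer {_inner = Inner {_temp = 23.0}}
```

The same `inner . temp` chain serves both reading and updating, which is the point being made about chaining.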

>> The trouble is that xmonad's idea of "set of workspaces" is encoded
>> as a list zipper.  It does not allow me to encode the tree.  It only
>> ever holds *lists* of workspaces and does not abstract over this data
>> structure.
>
> So? Keep your own structure and keep pointers to XMonad-stored
> entries.

Yes, I do that, arguably not in the most elegant way possible, because
the predefined workspace switchers and layout algorithms need to
understand my concept as well.

If xmonad simply abstracted over StackSet, this would be very easy and
transparent to do.




Re: [Nix-dev] Announcing New Ruby Support

2015-02-10 Thread Cillian de Róiste
Hi!

I thought I'd try to package jekyll, but I've run into some trouble...

2015-01-22 5:29 GMT+01:00 Charles Strahan :

> To use the new system, first create (or copy over) a Gemfile describing the
> required gems.
>

I grabbed the Gemfile from the github repo:
$ wget https://raw.githubusercontent.com/jekyll/jekyll/master/Gemfile

> Next, create a Gemfile.lock by running `bundler package --no-install` in the
> containing directory (this also creates two folders - vendor and .bundle -
> which you'll want to delete before committing).
>

I had first installed pkgs.bundler, which got me version 1.7.9; that version
doesn't have the --no-install option, which seems to have been added in 1.8.
I then installed pkgs.bundler_HEAD, which also reports its version as 1.7.9
but does have the --no-install option. However, I got stuck on the
next step:

$ bundler package --no-install
There are no gemspecs at /home/goibhniu/ruby/jekyll.

I'm afraid I know very little about ruby or gems etc., so any help/pointers
would be appreciated.

Thanks!
Cillian


> Now, you'll need to dump the lockfile as a Nix expression. To do so, use Bundix
> in the target directory:
>
>   $(nix-build '<nixpkgs>' -A bundix)/bin/bundix
>
> That will drop a gemset.nix file in your current directory, which describes
> the sources for all of the gems and their respective SHAs.
>
> Finally, you'll need to use bundlerEnv to build the gems. The following
> example is how we package the sup mail reader:
>
>   { stdenv, lib, bundlerEnv, gpgme, ruby, ncurses, writeText, zlib, xapian
>   , pkgconfig, which }:
>
>   bundlerEnv {
> name = "sup-0.20.0";
>
> inherit ruby;
> gemfile = ./Gemfile;
> lockfile = ./Gemfile.lock;
> gemset = ./gemset.nix;
>
> # This is implicit:
> #
> #   gemConfig = defaultGemConfig;
> #
> # You could just as well do the following:
> #
> #   gemConfig.ncursesw = spec: {
> # buildInputs = [ ncurses ];
> # buildFlags = [
> #   "--with-cflags=-I${ncurses}/include"
> #   "--with-ldflags=-L${ncurses}/lib"
> # ];
> #   };
> #
> # Where `spec` is the attributes of the corresponding gem,
> # in case you wanted to make something predicated on version,
> # for example.
> #
> # See default-gem-config.nix for more examples.
>
> meta = with lib; {
>   description = "A curses threads-with-tags style email client";
>   homepage    = http://supmua.org;
>   license = with licenses; gpl2;
>   maintainers = with maintainers; [ cstrahan lovek323 ];
>   platforms   = platforms.unix;
> };
>   }
>
> And that's all there is to it!
>
> Enjoy,
>
> -Charles
>
>
>
>


-- 
NixOS: The Purely Functional Linux Distribution
http://nixos.org


[Nix-dev] Missing GeoIP databases

2015-02-10 Thread Christoph-Simon Senjak
Hello.

I installed geoip, and GEOIPLOOKUP(1) says the databases should be in 
/nix/store/b952llxwhpd8046r40xkkkjgg1vmcw7q-geoip-1.6.2/share/GeoIP but 
... they are not. Is this intentional or is this a bug?

Best regards
Christoph-Simon Senjak


Re: [Nix-dev] Missing GeoIP databases

2015-02-10 Thread Vladimír Čunát

On 02/10/2015 07:50 PM, Christoph-Simon Senjak wrote:

> Is this intentional or is this a bug?

I don't know - I don't see any attempt to put the data in there.
I just noticed that ntopng does include some geoip data.
(I've never used any of the above.)


Vladimir






Re: [Nix-dev] Missing GeoIP databases

2015-02-10 Thread Christoph-Simon Senjak
On 10.02.2015 20:01, Vladimír Čunát wrote:
> On 02/10/2015 07:50 PM, Christoph-Simon Senjak wrote:
>> Is this intentional or is this a bug?
>
> I don't know - I don't see any attempt to put the data in there.
> I just noticed that ntopng does include some geoip data.
> (I've never used any of the above.)

In other distros, the database is usually either included or provided as an
extra package.

CSS


Re: [Nix-dev] Missing GeoIP databases

2015-02-10 Thread Eelco Dolstra
Hi,

On 10/02/15 19:50, Christoph-Simon Senjak wrote:

> I installed geoip, and GEOIPLOOKUP(1) says the databases should be in 
> /nix/store/b952llxwhpd8046r40xkkkjgg1vmcw7q-geoip-1.6.2/share/GeoIP but 
> ... they are not. Is this intentional or is this a bug?

More or less intentional, since the upstream geoip package does not contain a
database. So you should download it yourself and pass the path on the command
line or via the API.

However, if there is a free (as in freedom) database somewhere, we could include
that by default.

-- 
Eelco Dolstra | LogicBlox, Inc. | http://nixos.org/~eelco/


Re: [Nix-dev] Missing GeoIP databases

2015-02-10 Thread Bjørn Forsman
On 10 February 2015 at 20:34, Eelco Dolstra  wrote:
> Hi,
>
> On 10/02/15 19:50, Christoph-Simon Senjak wrote:
>
>> I installed geoip, and GEOIPLOOKUP(1) says the databases should be in
>> /nix/store/b952llxwhpd8046r40xkkkjgg1vmcw7q-geoip-1.6.2/share/GeoIP but
>> ... they are not. Is this intentional or is this a bug?
>
> More or less intentional, since the upstream geoip package does not contain a
> database. So you should download it yourself and pass the path on the command
> line or via the API.
>
> However, if there is a free (as in freedom) database somewhere, we could 
> include
> that by default.

The geoip packages used by our ntopng expression are free
(http://dev.maxmind.com/geoip/legacy/geolite/). But I don't see any
versioned archives; they only have unversioned archives that are
updated every month :/

- Bjørn


Re: [Nix-dev] Unstable Nixpkgs on stable NixOS (was: Automatically locking the screen with xautolock)

2015-02-10 Thread Jeffrey David Johnson
Wow, my approach has been totally different. I think it's better at being
declarative, but it doesn't allow getting user vs. system packages from
different sources, so I have to pick between all stable or all unstable, or
bother with the details of merging them.

I've got everything defined in a git repo full of .nix files with a submodule 
for my nixpkgs fork. When I want to add a package (or even just change one of 
my dotfiles) I edit the repo and do a nixos-rebuild. I think nix-env still 
references the official release-14.12 channel. Ideally I want it using my repo 
too, but since I barely use it there hasn't been a problem yet.

Short of changing my setup in a big way, maybe I should try merging the 
unstable branch into mine periodically? If that gets messed up I can copy and 
paste specific packages instead.

Also, would setting some combination of NIXPKGS, NIXCFG, and NIX_PATH to point
to my repos let me avoid channels altogether?

Thanks for writing this up! The environment variables, channels_root etc. are 
still sort of black magic to me and it helps.
Jeff

On Mon, 9 Feb 2015 19:17:46 -0800
Michael Alyn Miller  wrote:

> The 02/08/2015 12:21, Jeffrey David Johnson wrote:
> > Nice! I'm currently binding "gksu 'i3lock -c00 &
> > pm-suspend'" to a hotkey using i3, but this is better.  Is
> > there a standard way to get it in my fork of nixpkgs, which is
> > tracking the release-14.12 branch?  (I could just copy and
> > paste but if there's a better way now would be a good time to
> > learn.)
> 
> I certainly don't have an authoritative answer to this question,
> but I'll tell you what I am doing and then maybe someone with
> much more NixOS/Nixpkgs experience will chime in with
> tweaks/guidance.
> 
> In general I have the following goals for my NixOS system:
> 
> 1. I want everything to be reproducible, which of course means that my
>system is configured in /etc/nixos/configuration.nix, but also that
>any packages that I install as an individual user are reproducible as
>well.
> 2. Related to the above, I only want to put "system" packages into
>configuration.nix.  My theory here is that most of my package
>management is going to be done as a regular user, by creating
>project-specific nix-shell environments, etc.  Also, `nix-env -q` is
>a nice way to see everything that is installed and it doesn't appear
>that similar tools exist at the NixOS level.
> 
> Goal #1 means that configuration.nix is pretty basic -- the bare minimum
> number of packages to get my system working and launch an X environment
> (the window manager, for example, but no apps).
> 
> Goal #2 was more complex.  I found the following two references on this
> topic:
> 
> - 
> 
> - 
> 
> I went with the first option because I wanted to be able to use `nix-env
> -q` to see what I had installed, regardless of how I did the install
> (manually or through the "myPackageSelections.nix" file).  I can provide
> more details here if you like.
> 
> With that choice out of the way, the next problem that I ran into was
> getting access to newer packages.  I have contributed a couple of
> packages to Nixpkgs, fixed some others, etc. and so far I had just been
> manually installing them out of my nixpkgs checkout.  This was not
> entirely compatible with my first goal, because now I had installed
> packages that were not listed in my configuration.
> 
> Enter the nixpkgs-unstable channel!  That channel seems to be updated
> pretty frequently and means that changes are available in binary form
> within a few days.  Thankfully my second goal meant that I already had a
> perfect split between stable and unstable: my NixOS configuration could
> stay on 14.12 and my user-level Nixpkgs was free to go unstable.
> 
> Getting this working was relatively straightforward, although I had to
> modify my environment in a couple of unexpected ways (more on this
> later).
> 
> The simple part is the `nix-channel --add 
> https://nixos.org/channels/nixpkgs-unstable`
> and `nix-channel --update` as a normal user.  Some things worked after
> that, but not everything.  `nix-env -i whatever` will happily use the
> nixpkgs-unstable channel because it finds that channel in ~/.nix-defexpr.
> Adding in the -f flag to specify your own expression means that nix-env
> no longer knows about the channel though.  Instead, nix-env falls back
> to NIX_PATH, which by default points at the stable NixOS channel and
> therefore has no idea that you want to use nixpkgs-unstable.
> 
> I found that very confusing, although that's probably because of the
> subtlety that is the myPackageSelections.nix-based way of managing my
> package list.  In any event, I had to dig through the docs to understand
> the situation.
> 
> I fixed this problem by setting NIX_PATH to 
> `nixpkgs=~/.nix

Re: [Nix-dev] Unstable Nixpkgs on stable NixOS (was: Automatically locking the screen with xautolock)

2015-02-10 Thread Kirill Elagin
On Tue Feb 10 2015 at 6:12:58 PM Wout Mertens wrote:

> where it would be a lot cleaner to have everything under ~/.nix/
>

`$XDG_CONFIG_HOME/nix`.


Re: [Nix-dev] UI management (Was: Use Haskell for Shell Scripting)

2015-02-10 Thread Ertugrul Söylemez
> Actually, my idea of the core boot sequence is contrary to your goals:
> for core system I think in terms "I want to easily run recovery after
> a USB boot and I want to describe my system in terms of its imperative
> recovery procedure started from initramfs". It is likely that my
> service management will likely also be like that; but I knew you were
> doing something for starting a NixPkgs-like-structured service
> repository and so decided to get by with a minimum set of daemons
> until I can choose to just write service definitions for your approach
> or start my own set of definitions.

You don't have to wait.  If your services are defined in a functional
manner, you won't have much trouble translating them to my concept.
That means that a service is entirely self-contained and receives its
full configuration from its arguments.  It then returns an attribute set
that specifies at the very least the filesystem directories it needs to
access and a non-daemon program that represents the service:

{config, pkgs}: {
description = "nginx web server: " + f config;
init = builtins.toFile "nginx-init.sh" "...";
mounts = ["/var/www"];
}

You should assume that the init program runs as PID 1 and that it will
be killed from outside by the usual SIGTERM-wait-SIGKILL sequence.
SIGTERM is only sent to the init program.  If it does not quit fast
enough, the entire process group is killed by SIGKILL.  Also you should
assume that the init program can only access the Nix store and the
directories explicitly mentioned in `mounts`.  It should assume that the
root directory is a read-only RAM-disk and that `/proc`, `/sys` and
`/dev` are already mounted appropriately.


> Hm, want to try out QueryFS?

It looks very interesting.  Currently I'd have little use for it, but if
it's available online, I will certainly try it out when a use case pops
up.


> OK. Well, are you OK with me inviting some other people from the
> beginning? Not sure who of them will join.

Sure, go ahead.




[Nix-dev] Bash completion makes bash startup awfully slow

2015-02-10 Thread Matthias Beyer
Hi,

We already had this on this ML, but the question that was posted (not
by me, btw) got no answer.

I can't live without bash completion, but using it makes bash awfully
slow at startup. Is there a way to improve this situation?


-- 
Mit freundlichen Grüßen,
Kind regards,
Matthias Beyer

Proudly sent with mutt.
Happily signed with gnupg.




Re: [Nix-dev] Announcing New Ruby Support

2015-02-10 Thread Cillian de Róiste
2015-02-10 19:25 GMT+01:00 Cillian de Róiste :
>
> Hi!
>
> I thought I'd try to package jekyll, but I've run into some trouble...
>
> 2015-01-22 5:29 GMT+01:00 Charles Strahan :
>>
>> To use the new system, first create (or copy over) a Gemfile describing the
>> required gems.
>
>
> I grabbed the Gemfile from the github repo:
> $ wget https://raw.githubusercontent.com/jekyll/jekyll/master/Gemfile
>
>
>> Next, create a Gemfile.lock by running `bundler package --no-install` in the
>> containing directory (this also creates two folders - vendor and .bundle -
>> which you'll want to delete before committing).
>
>
> I had first installed pkgs.bundler, which got me version 1.7.9, which doesn't 
> have the --no-install option which seems to have been added in 1.8. I then 
> installed pkgs.bundler_HEAD which also reports the version to be 1.7.9 but 
> does have the --no-install option. However, I got stuck on the next step:
>
> $ bundler package --no-install
> There are no gemspecs at /home/goibhniu/ruby/jekyll.

The second line of the Gemfile has the word "gemspec"; I commented it out and
was able to get further.

I then ran `bundler package --no-install --path vendor/bundle`, which
successfully created the lockfile.

>> Now, you'll need to dump the lockfile as a Nix expression. To do so, use 
>> Bundix
>> in the target directory:
>>
>>   $(nix-build '<nixpkgs>' -A bundix)/bin/bundix

To avoid rebuilding the world I checked out 317d78d and tried to
install bundix from there. I had to patch the hash for bundix and
bundler [1], but then I was able to install bundix and create the
gemset.nix \o/

>>
>> That will drop a gemset.nix file in your current directory, which describes 
>> the
>> sources for all of the gems and their respective SHAs.
>>
>> Finally, you'll need to use bundlerEnv to build the gems. The following 
>> example
>> is how we package the sup mail reader:


I copied the example nix expression, changed the name and added it to
all-packages.nix to build it. All went quite well until I got the
following error:

...
Installing liquid 3.0.1

Gem::Ext::BuildError: ERROR: Failed to build gem native extension.

/nix/store/wdjnbdyrvsfbg245qyp3r7qwhlhxzd1c-ruby-1.9.3-p547/bin/ruby
-r ./siteconf20150210-115-km0s7e.rb extconf.rb
creating Makefile

make  clean
building clean-static
building clean

make
building lexer.o
compiling lexer.c
lexer.c: In function 'lex_one':
lexer.c:143:5: error: implicit declaration of function
'rb_enc_raise' [-Werror=implicit-function-declaration]
 rb_enc_raise(utf8_encoding, cLiquidSyntaxError, "Unexpected
character %c", c);

IIUC it fails to find some ruby header file. Any tips on how to proceed?

Cheers,
Cillian


Patches [1]:

diff --git a/pkgs/development/interpreters/ruby/bundix/gemset.nix
b/pkgs/development/interpreters/ruby/bundix/gemset.nix
index a2c633f..a3e732c 100644
--- a/pkgs/development/interpreters/ruby/bundix/gemset.nix
+++ b/pkgs/development/interpreters/ruby/bundix/gemset.nix
@@ -6,7 +6,7 @@
   url = "https://github.com/cstrahan/bundix.git";
   rev = "5df25b11b5b86e636754d54c2a8859c7c6ec78c7";
   fetchSubmodules = false;
-  sha256 = "0334jsavpzkikcs7wrx7a3r0ilvr5vsnqd34lhc58b8cgvgll47p";
+  sha256 = "1iqx12y777v8gszggj25x0xcf6lzllx58lmv53x6zy3jmvfh4siv";
 };
 dependencies = [
   "thor"
diff --git a/pkgs/development/interpreters/ruby/bundler-head.nix
b/pkgs/development/interpreters/ruby/bundler-head.nix
index 2e8cfc8..43a5961 100644
--- a/pkgs/development/interpreters/ruby/bundler-head.nix
+++ b/pkgs/development/interpreters/ruby/bundler-head.nix
@@ -5,7 +5,7 @@ buildRubyGem {
   src = fetchgit {
 url = "https://github.com/bundler/bundler.git";
 rev = "a2343c9eabf5403d8ffcbca4dea33d18a60fc157";
-sha256 = "1l4r55n1wzr817l225l6pm97li1mxg9icd8s51cpfihh91nkdz68";
+sha256 = "1fywz0m3bb0fmcikhqbw9iaw67k29srwi8dllq6ni1cbm1xfyj46";
 leaveDotGit = true;
   };
   dontPatchShebangs = true;


-- 
NixOS: The Purely Functional Linux Distribution
http://nixos.org


[Nix-dev] Darwin Hydra-builder

2015-02-10 Thread Eric Seidel
Hi all,

Since we merged in the new stdenv for darwin, hydra has been unable to
build any darwin packages (see, e.g.
http://hydra.nixos.org/build/19488052/nixlog/1). The error looks
familiar, I believe the issue is that butters doesn't have Apple's CLI
tools installed.

Could someone with access to butters install the CLI tools so darwin
users on nixpkgs/master can have a binary cache?

Thanks!
Eric


Re: [Nix-dev] Use Haskell for Shell Scripting

2015-02-10 Thread Ertugrul Söylemez
>> Oh, Common Lisp (CL) macros don't correspond to TH, but rather to
>> regular functions in Haskell.  We have first class actions together
>> with lazy evaluation.  What is "code is data is code" in CL is
>> "actions are first class values" in Haskell.
>>
>> You only need TH when you need to generate something that is not
>> first class, for example class instances.
>
> And many actions are first class values in CL. But there are many
> things that are not first class even in Haskell. Macros can manipulate
> lexical environments in many convenient ways, for example.
>
> Also, while you could make a function out of a complex iteration
> macro, it is guaranteed to be simpler to use.

Remember that we're talking about very different paradigms.  I can
totally see how you would want and love macros in CL.  In CL you encode
your solution in such a way that using macros is the natural thing to
do, and then lexical manipulation is probably something that comes in
handy.  In Haskell your encoding is entirely different, so you wouldn't
use macros in the first place.  Instead you would use non-strict
higher-order functions together with type classes.


> Ah, so Haskell has no usable records without lenses?

The standard record system is very limited, even though some language
extensions try to improve it (without much success).  However, you would
most likely want to use lenses anyway, even if standard records were
richer.

Modern van Laarhoven lenses are a powerful abstraction for accessing
data structures, mainly because a lens does not necessarily correspond
to a certain concrete field (it can correspond to many fields or to a
logical alternate view of certain data) and also because both lenses and
lens operators are first class values.

Where `temps` is a list of temperatures in degrees Celsius, the
following is that list with three degrees Fahrenheit added to each item:

(traverse . fahrenheit +~ 3) temps

The thing in parentheses is a function that takes any traversable data
structure (e.g. lists or trees), interprets its values as temperatures,
applies a Fahrenheit view and then adds 3.  The result is the modified
data structure (i.e. it's pure).

The following is the same function, except that it only adds if the
original temperature is greater than 10.  All lower temperatures remain
untouched:

traverse . filtered (> 10) . fahrenheit +~ 3

Finally as said lenses and lens operators are first class, so you can
take them as arguments, return them as results and put them into data
structures.  Let `fields` be a list of lenses.  Then the following is a
list of lenses pointing deeper into the individual data structures (into
the `field` subfield of whatever each lens points to):

(traverse %~ (. field)) fields

That's why we love lenses.  They are more than just a recovery from our
lack of a decent record system.  And they are defined in plain Haskell,
just to show you how far we can go without macros. =)




Re: [Nix-dev] Announcing New Ruby Support

2015-02-10 Thread Wout Mertens
rb_enc_raise seems to be a ruby 2.0 thing and you're using 1.9.3. Some
mixup somewhere?
https://bugs.ruby-lang.org/issues/5650 was when it was added to 2.0 3 years
ago...


On Tue Feb 10 2015 at 11:05:56 PM Cillian de Róiste <
cillian.deroi...@gmail.com> wrote:

> 2015-02-10 19:25 GMT+01:00 Cillian de Róiste :
> >
> > Hi!
> >
> > I thought I'd try to package jekyll, but I've run into some trouble...
> >
> > 2015-01-22 5:29 GMT+01:00 Charles Strahan :
> >>
> >> To use the new system, first create (or copy over) a Gemfile describing
> the
> >> required gems.
> >
> >
> > I grabbed the Gemfile from the github repo:
> > $ wget https://raw.githubusercontent.com/jekyll/jekyll/master/Gemfile
> >
> >
> >> Next, create a Gemfile.lock by running `bundler package --no-install`
> in the
> >> containing directory (this also creates two folders - vendor and
> .bundle -
> >> which you'll want to delete before committing).
> >
> >
> > I had first installed pkgs.bundler, which got me version 1.7.9, which
> doesn't have the --no-install option which seems to have been added in 1.8.
> I then installed pkgs.bundler_HEAD which also reports the version to be
> 1.7.9 but does have the --no-install option. However, I got stuck on the
> next step:
> >
> > $ bundler package --no-install
> > There are no gemspecs at /home/goibhniu/ruby/jekyll.
>
> The second line of the Gemfile has the word "gemspec"; I commented it
> out and was able to get further.
>
> I ran `bundler package --no-install --path vendor/bundle` which
> successfully created the lockfile.
>
> >> Now, you'll need to dump the lockfile as a Nix expression. To do so,
> use Bundix
> >> in the target directory:
> >>
> >>   $(nix-build '<nixpkgs>' -A bundix)/bin/bundix
>
> To avoid rebuilding the world I checked out 317d78d and tried to
> install bundix from there. I had to patch the hash for bundix and
> bundler [1], but then I was able to install bundix and create the
> gemset.nix \o/
>
> >>
> >> That will drop a gemset.nix file in your current directory, which
> describes the
> >> sources for all of the gems and their respective SHAs.
> >>
> >> Finally, you'll need to use bundlerEnv to build the gems. The following
> example
> >> is how we package the sup mail reader:
>
>
> I copied the example nix expression, changed the name and added it to
> all-packages.nix to build it. All went quite well until I got the
> following error:
>
> ...
> Installing liquid 3.0.1
>
> Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
>
> /nix/store/wdjnbdyrvsfbg245qyp3r7qwhlhxzd1c-ruby-1.9.3-p547/bin/ruby
> -r ./siteconf20150210-115-km0s7e.rb extconf.rb
> creating Makefile
>
> make  clean
> building clean-static
> building clean
>
> make
> building lexer.o
> compiling lexer.c
> lexer.c: In function 'lex_one':
> lexer.c:143:5: error: implicit declaration of function
> 'rb_enc_raise' [-Werror=implicit-function-declaration]
>  rb_enc_raise(utf8_encoding, cLiquidSyntaxError, "Unexpected
> character %c", c);
>
> IIUC it fails to find some ruby header file. Any tips on how to proceed?
>
> Cheers,
> Cillian
>
>
> Patches [1]:
>
> diff --git a/pkgs/development/interpreters/ruby/bundix/gemset.nix
> b/pkgs/development/interpreters/ruby/bundix/gemset.nix
> index a2c633f..a3e732c 100644
> --- a/pkgs/development/interpreters/ruby/bundix/gemset.nix
> +++ b/pkgs/development/interpreters/ruby/bundix/gemset.nix
> @@ -6,7 +6,7 @@
>url = "https://github.com/cstrahan/bundix.git";
>rev = "5df25b11b5b86e636754d54c2a8859c7c6ec78c7";
>fetchSubmodules = false;
> -  sha256 = "0334jsavpzkikcs7wrx7a3r0ilvr5vsnqd34lhc58b8cgvgll47p";
> +  sha256 = "1iqx12y777v8gszggj25x0xcf6lzllx58lmv53x6zy3jmvfh4siv";
>  };
>  dependencies = [
>"thor"
> diff --git a/pkgs/development/interpreters/ruby/bundler-head.nix
> b/pkgs/development/interpreters/ruby/bundler-head.nix
> index 2e8cfc8..43a5961 100644
> --- a/pkgs/development/interpreters/ruby/bundler-head.nix
> +++ b/pkgs/development/interpreters/ruby/bundler-head.nix
> @@ -5,7 +5,7 @@ buildRubyGem {
>src = fetchgit {
>  url = "https://github.com/bundler/bundler.git";
>  rev = "a2343c9eabf5403d8ffcbca4dea33d18a60fc157";
> -sha256 = "1l4r55n1wzr817l225l6pm97li1mxg9icd8s51cpfihh91nkdz68";
> +sha256 = "1fywz0m3bb0fmcikhqbw9iaw67k29srwi8dllq6ni1cbm1xfyj46";
>  leaveDotGit = true;
>};
>dontPatchShebangs = true;
>
>
> --
> NixOS: The Purely Functional Linux Distribution
> http://nixos.org
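
For reference, the bundlerEnv expression being adapted above follows the
shape of the sup example from the announcement; a minimal sketch for
jekyll (the name and paths are illustrative):

```nix
{ bundlerEnv, ruby }:

bundlerEnv {
  name = "jekyll-env";        # illustrative
  inherit ruby;
  gemfile  = ./Gemfile;       # the Gemfile fetched from the jekyll repo
  lockfile = ./Gemfile.lock;  # produced by `bundler package --no-install`
  gemset   = ./gemset.nix;    # generated by bundix
}
```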


[Nix-dev] DNS fails during nixos-install in Virtualbox

2015-02-10 Thread Joe Hillenbrand
I would like to gently bump this issue:
https://github.com/NixOS/nixpkgs/issues/6066

I'm currently assigned the project to explore the possibility of replacing
all our Puppet+Ubuntu infrastructure with NixOS at my work.

We heavily use Vagrant for our configuration management development, so my
first logical step was to create a Vagrant base image.

My goal is to create an automatically updating NixOS Vagrant image to host
and maintain on vagrantcloud.

The issue is very easy to replicate. Just download the repo and run make
(it does take a while though).

I can't really make sense of what could be causing this. DNS works as soon
as the nixos-install process finishes.

A rerun of nixos-install after the failure allows the installation to
finish, but I suspect it's not actually a successful install.

Best Regards,

-Joe Hillenbrand
joehillen @ freenode


Re: [Nix-dev] A few questions about ARM support and NixOS on a Chromebook

2015-02-10 Thread Luke Clifton
Yes, I have done this method in the past as well, but like you say, it only
helps with C and C++. I'd rather have a setup that works for everything
first, and then selectively try and optimise pieces of it (like using
distcc). Although I was wondering how this would play with Nix and getting
"reproducible" builds. I assume that the cross compilers aren't set up by
nix, and that some sort of override is happening? I'll take a look at the
notes on the wiki.

On 10 February 2015 at 23:14, Wout Mertens  wrote:

> There's another option: build natively with distcc pointing to
> cross-compilers on x86 boxes. All the configuration etc happens natively
> and the compiles themselves are sped up. I wonder if that's the approach
> Vladimír Čunát took earlier? He made the notes on how to set up distcc for
> raspberry pi on the wiki iirc.
>
> Yet another option: run the above setup in qemu, so the whole thing runs on
> an x86 box with the heaviest lifting done natively.
>
> These only speed up C/C++ of course.
>
> Wout.
>
> On Mon, Feb 9, 2015, 5:01 PM Harald van Dijk  wrote:
>
>> On 09/02/2015 15:57, James Haigh wrote:
>>
>> On 09/02/15 14:16, Harald van Dijk wrote:
>>
>> On 09/02/2015 14:55, James Haigh wrote:
>>
>> On 28/01/15 07:42, Luke Clifton wrote:
>>
>> Hi Bjørn,
>>
>>  I have read that thread. I agree with you 100% that native builds (on
>> real or virtual hardware) is the only way this can work. Upstream doesn't
>> usually care if their software can cross compile, and they can't maintain
>> it themselves even if they did. Sometimes it isn't even an option, e.g. GHC
>> still can't cross compile template Haskell yet.
>>
>> I don't understand why cross-compilation is even a thing, other than
>> decades of false assumptions being baked into compilers.
>> As I understand, if a compiler (and by ‘compiler’ I'm referring to
>> the whole toolchain required for compilation) is taking source code and
>> compilation options as input, and giving object code for the specified
>> platform as output, it is called ‘cross-compiling’ if the specified target
>> platform is different to the platform that the compiler is running on. If
>> GCC is running on ARM, compiling code ‘natively’ to ARM successfully, it is
>> counterintuitive that it would fail to build for ARM if GCC is running on
>> x86. And vice versa. A compiler should produce object code for a target
>> platform that implements the source code – it may not have the same
>> efficiency as the output of other compilers (or with other compilation
>> options), but should have the same correctness when execution completes. If
>> the source code being compiled is a specific version of the GCC source code
>> itself, and it is compiled for both x86 and ARM, then if the compilation is
>> computationally correct, both compilations of GCC should produce programs
>> that, although they will compute in a different way and with different
>> efficiency, should give the exact same object code when given the same
>> source code and parameters. So if the target platform parameter is ARM,
>> they should both build exactly the same ARM machine code program.
>>
>> All of this is true, but the toolchain usually doesn't have any problems
>> with cross-compilations.
>>
>> However, evidently this is not the case unfortunately. So the
>> compilers or their toolchains are, in essence, receiving the platform that
>> they are running on as ‘input’ to the build, and making assumptions that
>> this build platform has something to do with the target platform. I.e. they
>> are _aware_ of the platform that they're building on, whereas
>> theoretically, they shouldn't be. Apparently this has a lot to do with
>> configure scripts.
>>
>> The configure scripts, or similar machinery in non-autoconf packages, are
>> part of the package, not part of the toolchain. Many programs use runtime
>> checks in configure scripts. A trivial example that hopefully doesn't exist
>> in any real package:
>>
>> If a package only compiles on platforms where sizeof(int) == 4, or needs
>> special code on platforms where sizeof(int) != 4, its configure script
>> might try to detect those platforms by compiling and linking
>>
>> int main(void) {
>>   return sizeof(int) != 4;
>> }
>>
>> and then executing it. If the execution succeeds (i.e. returns zero),
>> then sizeof(int) == 4. If the execution doesn't succeed, then the configure
>> script assumes that sizeof(int) != 4, even though it is quite possible
>> that the only reason that execution fails is that the generated executable
>> is for a different platform.
>>
>> Other examples are build environments that compile generator tools during
>> the build and run them to produce further source files to compile. Such a
>> generator tool must be compiled with the build compiler, not with
>> the host compiler, or running it will fail when cross-compiling. Still, many
>> packages build such tools with the host compiler anyway, because upstream
>> only tests native compilations. This