different versions of glibc-locales in the same profile

2018-12-13 Thread Ricardo Wurmus
Hi Guix,

on a foreign distribution what is the recommended way to install
different versions of glibc-locales into the same profile?

Since glibc-locales installs its files into a versioned directory,
having glibc-locales@2.27 in a profile that also contains
glibc-locales@2.28 does not lead to file conflicts.  However, Guix
refuses to build such a manifest because there are two packages with
the same name:

--8<---cut here---start->8---
(use-modules (guix inferior) (guix channels)
             (srfi srfi-1))   ;for 'first'

(define inferior-2.27
  (inferior-for-channels
   (list (channel
          (name 'guix)
          (url "https://git.savannah.gnu.org/git/guix.git")
          (commit
           "b2c8d31ff673ca1714be16968c9ead9a99ae2b7b")))))

(packages->manifest
 (list (first (lookup-inferior-packages inferior-2.27 "glibc-locales"))
       (specification->package "glibc-locales")))
--8<---cut here---end--->8---

Should we add package definitions for older glibc-locales and give them
new names to work around this?  Should we add a property to
glibc-locales to indicate to “guix package” that this package should be
ignored when trying to predict and prevent conflicts?
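
If we went the property route, a minimal sketch of what the annotation
could look like (the property name is invented; nothing reads it today):

--8<---cut here---start->8---
(use-modules (guix packages) (gnu packages base))

;; Hypothetical: tag the package so that a modified 'guix package'
;; could skip the same-name conflict check for it.
(define glibc-locales/tagged
  (package
    (inherit glibc-locales)
    (properties (cons '(ignore-name-conflicts? . #t)
                      (package-properties glibc-locales)))))
--8<---cut here---end--->8---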

--
Ricardo




Re: Coq native-inputs: Useless hevea / texlive?

2018-12-13 Thread Julien Lepiller
They must have been useful in older versions, but I didn't pay attention. If 
they are not needed for anything, please go ahead and remove them!

Thank you!

On 13 December 2018 at 22:20:15 GMT+01:00, Pierre Neidhardt wrote:
>Hevea and texlive are native-inputs for Coq; however, they never seem
>to be used.
>
>https://github.com/coq/coq/blob/V8.8.2/INSTALL does not mention them as
>build dependencies either.
>
>Shall we remove them?



Coq native-inputs: Useless hevea / texlive?

2018-12-13 Thread Pierre Neidhardt
Hevea and texlive are native-inputs for Coq; however, they never seem
to be used.

https://github.com/coq/coq/blob/V8.8.2/INSTALL does not mention them as
build dependencies either.

Shall we remove them?
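
If so, the change is a plain input removal; here is a sketch against
the package object (assuming Coq lives in (gnu packages coq); the real
fix would edit that file directly):

--8<---cut here---start->8---
(use-modules (guix packages) (gnu packages coq) (srfi srfi-1))

;; Coq without the two seemingly unused native-inputs.
(define coq/trimmed
  (package
    (inherit coq)
    (native-inputs
     (alist-delete "hevea"
                   (alist-delete "texlive"
                                 (package-native-inputs coq))))))
--8<---cut here---end--->8---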

-- 
Pierre Neidhardt
https://ambrevar.xyz/




Re: Why is GCL built with gcc@4.9?

2018-12-13 Thread Efraim Flashner
On Thu, Dec 13, 2018 at 12:03:59AM -0500, Mark H Weaver wrote:
> Hi Efraim,
> 
> I'm curious about this commit of yours from April 2017:
> 
> --8<---cut here---start->8---
> commit 5c7815f205e9164d4b82378de91bee7a65bcfbcb
> Author: Efraim Flashner 
> Date:   Mon Apr 10 05:20:09 2017 +0300
> 
> gnu: gcl: Build with gcc@4.9.
> 
> * gnu/packages/lisp.scm (gcl)[native-inputs]: Add gcc@4.9.
> --8<---cut here---end--->8---
> 
> Do you remember why you did this?  There's no explanation in the
> comments, nor in the commit log, nor in the 'bug-guix' or 'guix-devel'
> email archives from around that time.
> 
> I'd like to remove it, and I've verified that on x86_64-linux, GCL
> builds successfully with the default compiler.
> 
> In general, it would be good to include comments with rationale for
> workarounds like this, so that we have some idea of when the workaround
> can be removed, and what tests we must do to ensure that the original
> problem has been addressed.
> 
>  Thanks,
>Mark

I looked through the commits and I'm not sure why I added gcc@4.9.  When
did we change our default gcc from 4.9 to 5?  I've made one attempt so
far at building on aarch64-linux without gcc@4.9 and got a core dump,
but I haven't rebuilt it recently to check whether it still does that
as-is.

I'll take a closer look at it and try to see what's up.
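
In the meantime, the kind of annotated workaround Mark asks for could
look like this (a sketch; the rationale comment is a placeholder for
whatever the real reason turns out to be):

--8<---cut here---start->8---
(use-modules (guix packages) (gnu packages lisp) (gnu packages gcc)
             (srfi srfi-1))

;; GCL with a self-documenting version of the workaround.
(define gcl/annotated
  (package
    (inherit gcl)
    (native-inputs
     `(;; XXX: Added in 5c7815f2 for reasons not recorded; drop once GCL
       ;; is confirmed to build with the default GCC on all supported
       ;; systems (x86_64 verified, aarch64 still core-dumps).
       ("gcc" ,gcc-4.9)
       ,@(alist-delete "gcc" (package-native-inputs gcl))))))
--8<---cut here---end--->8---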


-- 
Efraim Flashner  אפרים פלשנר
GPG key = A28B F40C 3E55 1372 662D  14F7 41AA E7DC CA3D 8351
Confidentiality cannot be guaranteed on emails sent or received unencrypted




Re: 01/01: hydra: Increase image sizes for USB image and Flash image.

2018-12-13 Thread Leo Famulari
On Wed, Dec 12, 2018 at 09:17:23AM +0100, Giovanni Biscuolo wrote:
> > I’m in favour of moving them elsewhere, such as %desktop-services.
>
> yes please: sound-related services are not-so-base; we do not need
> them on installation/web/mail/DNS etc. servers (and containers), and
> it does not make much sense to have to remove them on all that class
> of hosts/containers
> 
> it makes sense - semantically speaking - to move sound to
> %desktop-services since we only need sound on desktops
 
I would prefer if sound services were removed from the installation
system rather than from the %base-services. I am using systems based on
%base-services for music playback and other audio work. They are not
desktop systems — there is no graphical interface to these machines.
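
A sketch of that compromise, assuming the sound services move out of
%base-services and get added back explicitly where wanted
(alsa-service-type stands in for whatever actually moves, and is
assumed to have a usable default configuration):

--8<---cut here---start->8---
(use-modules (gnu services) (gnu services base) (gnu services sound))

;; Headless audio machine: plain %base-services plus sound, with no
;; desktop services at all.
(define my-headless-audio-services
  (cons (service alsa-service-type)
        %base-services))
--8<---cut here---end--->8---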




Re: End of beta soon? drop i686?

2018-12-13 Thread Mark H Weaver
Hi Danny,

Danny Milosavljevic  writes:

>> Note that we also lost 'icecat' on armhf-linux with the 52->60 upgrade,
>> because our 'rust' packages have never worked on armhf-linux.
>
> Wait, what?  I wasn't aware.  Let's track this as a bug - that's
> definitely not supposed to happen.
>
> mrustc works on armhf - I tested it on physical armhf hardware before merging.
>
> So one of the other Rusts doesn't work?  I'll check out Hydra logs...
>
> https://hydra.gnu.org/build/3215481/nixlog/1/raw indicates a timeout of
> silence - we might want to increase it.  (this particular step takes many
> many MANY minutes on x86_64, too).
>
> Would that be the "(properties '((timeout . 3600)" thing?  Or is a
> "timeout of silence" an extra setting?

I see that you increased the timeouts for rust-1.19.0, and I asked Hydra
to retry the build.  It failed again, in a different way:

  https://hydra.gnu.org/build/3215481/nixlog/2/raw
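
(For reference, the two knobs Danny asks about are distinct package
properties; a sketch with illustrative values:)

--8<---cut here---start->8---
(use-modules (guix packages) (gnu packages rust))

;; 'timeout' caps total build time in seconds, while 'max-silent-time'
;; is the separate "timeout of silence".  Both are ordinary package
;; properties read by the build farm.  The values here are only
;; illustrative.
(define rust-with-longer-timeouts
  (package
    (inherit rust)
    (properties '((timeout . 72000)
                  (max-silent-time . 18000)))))
--8<---cut here---end--->8---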

  Mark




Re: Question about Guix documentation

2018-12-13 Thread Laura Lazzati
Yes, I knew about the I/O operations stuff.  I turned my fancy retro PC
on just to check; it is not a laptop, it is a desktop PC.  It has a
single-core Pentium 4 at 3.2 GHz and a Parallel ATA hard drive.  Please
do not say it is fancy anymore, I have just written a post for my blog
about it being fancy :)

Regards!
Laura



Re: End of beta soon? drop i686?

2018-12-13 Thread swedebugia

On 2018-12-12 08:40, Andreas Enge wrote:

On Wed, Dec 12, 2018 at 03:16:56AM +0100, Ricardo Wurmus wrote:

I'm opposed to dropping i686 support.  If we dropped support for systems
that are not well supported in Guix, the only system left standing would
be x86_64-linux.  I believe it is of paramount importance that Guix be
portable to multiple processor architectures.  Even if we are not yet
able to provide a good user experience on other systems, we can still
keep the bulk of our code portable and multi-architecture aware.


I whole-heartedly agree with Mark.  We will not drop i686 support.


And a big advantage of i686, even if no one were to use it anymore, is
that we have large build-farm capacity for it, since it is built on the
x86_64 machines.  So it can act as our "canary in the mine", warning us
of portability problems.


Thanks for sharing your views on this.

I suggest we clearly state on the download page and/or in the manual
(under "Limitations") that i686 is a somewhat rough experience, with
fewer substitutes than x86_64 and possibly more bugs.


--
Cheers
Swedebugia



500+ Gnome-shell-extensions - how to handle?

2018-12-13 Thread swedebugia

Hi people

How do we best handle gnome-shell-extensions?

They seem to need to be registered via gsettings (see the example in
the attached install.sh), so a service for registering the installed
extensions seems necessary.
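
For reference, the registration boils down to a GSettings key; a
minimal Guile sketch of what such a service hook might run (the helper
name is mine, and a real service would merge with the existing list
rather than overwrite it):

--8<---cut here---start->8---
;; Enable a GNOME Shell extension by setting the GSettings key that the
;; shell reads.  NB: this overwrites the list instead of merging.
(define (enable-gnome-shell-extension uuid)
  (system* "gsettings" "set" "org.gnome.shell" "enabled-extensions"
           (string-append "['" uuid "']")))

;; e.g. (enable-gnome-shell-extension "cpufreq@konkor")
--8<---cut here---end--->8---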


So in a config we would specify something like this (the service and
configuration names are hypothetical):

(modify-services %desktop-services
  (shell-extension-service-type config =>
    (shell-extension-configuration
     (inherit config)
     (extensions
      (list cpufreq-konkor
            ...)))))

Things to ponder:
- what naming scheme?
- new importer for extensions?
- new build-system parsing the metadata.json?

Sum up of findings:
- 90% of the extensions had a link to a github repository and had a 
license clearly specified there.
- No license information on https://extensions.gnome.org (not suitable 
for import in my view)

- No dependencies found for extensions (they don't depend on each other)
- No need to support gnome-shell-js install via browser
- gse (gnome-shell-extensions) proposed as a prefix for these packages.
- author as they appear on https://extensions.gnome.org proposed as suffix.

--
NAMING:
What do we call them?
e.g. https://extensions.gnome.org/extension/1082/cpufreq/ "by konkor"
https://github.com/konkor/cpufreq -> gse-cpufreq-konkor?

as opposed to this one
https://extensions.gnome.org/extension/47/cpu-frequency/ "by ojo"
https://github.com/azdyb/gnome-shell-extension-cpufreq/tree/master/cpufreq%40zdyb.tk 
-> gse-cpufreq-ojo?


In the attached install.sh, gse-cpufreq-konkor is listed with the UUID
"cpufreq@konkor" (from
https://github.com/konkor/cpufreq/blob/master/install.sh).


-
Here is an example without an install script (which seems to be a common 
case):


cpupower:
https://github.com/martin31821/cpupower

There is a 
https://github.com/martin31821/cpupower/blob/master/metadata.json with 
the following information:


{
"localedir":"/usr/local/share/locale",
"shell-version": [
"3.10", "3.12", "3.14", "3.16", "3.18", "3.20", "3.22", "3.24", 
"3.26"
],
"uuid": "cpupo...@mko-sl.de",
"name": "CPU Power Manager",
"url": "https://github.com/martin31821/cpupower;,
"description": "Manage Intel_pstate CPU Frequency scaling driver",
"schema": "org.gnome.shell.extensions.cpupower"
}

Based on this, I guess we could import gnome-shell-extensions by
pointing at their git repositories.  Those without a git repository are
not free, because the source is not available...


Any thoughts?

--
Cheers
Swedebugia


install.sh
Description: application/shellscript


Need help rdelim. Trying to add caching to the npm-explorer

2018-12-13 Thread swedebugia

Hi

I get this error when I run the script testing the http-fetch proc.

sdb@komputilo ~/guile-npm-explorer$ guile -s npm-explorer.scm
fetching
allocate_stack failed: Cannot allocate memory

Any ideas what is wrong?
I think the error is on line 57. I tried with get-char/get-string-all 
and both fail the same way.


Maybe this is because I have to read with a loop and rdelim?  Does
anyone have a simple example of that?
The manual is unfortunately very terse on this subject, and a quick
search did not help.
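
For reference, the usual rdelim pattern is a named-let loop like this
(a minimal sketch; it may or may not be relevant to the error above):

--8<---cut here---start->8---
(use-modules (ice-9 rdelim))

;; Read PORT line by line until EOF and return everything as one string.
(define (read-all-lines port)
  (let loop ((line (read-line port))
             (lines '()))
    (if (eof-object? line)
        (string-join (reverse lines) "\n")
        (loop (read-line port) (cons line lines)))))
--8<---cut here---end--->8---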


--
Cheers
Swedebugia
(use-modules (guix import json)
	 (guix build utils)
	 (guix import utils)
	 (guix http-client)
	 (srfi srfi-34)
	 (ice-9 regex)
	 (ice-9 textual-ports)
	 (json))

;; from https://gitlab.com/swedebugia/guix/blob/08fc0ec6fa76d95f4b469aa85033f1b0148f7fa3/guix/import/npm.scm
(define (node->package-name name)
  "Given the NAME of a package on npmjs, return a Guix-compliant name
for the package.  We remove the '@' and keep the '/' in scoped
packages.  E.g. @mocha/test -> node-mocha/test"
  (cond ((and (string-prefix? "@" name)
              (string-prefix? "node-" name))
         (snake-case (string-drop name 1)))
        ((string-prefix? "@" name)
         (string-append "node-" (snake-case (string-drop name 1))))
        ((string-prefix? "node-" name)
         (snake-case name))
        (else
         (string-append "node-" (snake-case name)))))

(define (slash->_ name)
  (if (string-match "[/]" name)
  (regexp-substitute #f (string-match "/+" name)
			 'pre "_slash_" 'post)
  ;;else
  name))

(define (read-file file)
  (call-with-input-file file
    (lambda (port)
      (json->scm port))))

;; from
;; http://git.savannah.gnu.org/cgit/guix.git/tree/guix/import/json.scm
;; adapted to return unaltered JSON
;;
;; NB: renamed from 'http-fetch': the original name shadowed http-fetch
;; from (guix http-client), so the call below recursed into itself
;; forever, exhausting the stack ("allocate_stack failed").
(define* (http-fetch-json url
                          ;; Note: many websites return 403 if we omit a
                          ;; 'User-Agent' header.
                          #:key (headers `((user-agent . "GNU Guile")
                                           (Accept . "application/json"))))
  "Return the JSON resource at URL as a string, or #f if URL returns 403
or 404.  HEADERS is a list of HTTP headers to pass in the query."
  (guard (c ((and (http-get-error? c)
                  (let ((error (http-get-error-code c)))
                    (or (= 403 error)
                        (= 404 error))))
             #f))
    (let* ((port   (http-fetch url #:headers headers)) ;from (guix http-client)
           ;; changed the upstream here to return unaltered JSON:
           (result (get-string-all port)))
      (close-port port)
      result)))

(define (cache-handler name)
  ;; Check whether NAME is already cached in the cache directory.
  (let* ((cache-dir (string-append (getenv "HOME") "/.cache/npm-explorer"))
         ;; Sanitize NAME so it fits in a file name on disk;
         ;; it can contain '@' and '/'.
         (cache-name (slash->_ (node->package-name name)))
         (filename (string-append cache-dir "/" cache-name ".package.json")))
    (if (file-exists? filename)
        ;; yes: read it from the cache
        (read-file filename)
        ;; no: fetch it, cache it, then read it back
        (begin
          (when (not (directory-exists? cache-dir))
            (mkdir-p cache-dir))
          ;; The port is closed when 'call-with-output-file' returns.
          (call-with-output-file filename
            (lambda (port)
              (display
               ;; Fetch the JSON string and write it out.
               (http-fetch-json
                (string-append "https://registry.npmjs.org/" name))
               port)))
          ;; Read the content back from the file.
          (read-file filename)))))

(define (get-npm-module-dot name done level)
  (if (member name done)
      done
      ;; Convert the return value from the cache to a hash table.
      (let ((descr (cache-handler name)))
        (if descr
            (catch #t
              (lambda ()
                (let* ((latest (hash-ref (hash-ref descr "dist-tags") "latest"))
                       (descr (hash-ref (hash-ref descr "versions") latest))
                       (devdeps (hash-ref descr "devDependencies"))
                       (deps (hash-ref descr "dependencies")))
                  (if deps
                      (hash-fold
                       (lambda (key value acc)
                         (format (current-error-port)
                                 "level ~a: ~a packages\r" level (length acc))
                         (format #t "\"~a\" -> \"~a\";~%" name key)
                         (get-npm-module-dot key acc (+ 1 level)))
                       (cons name done)
                       deps)
                      (cons name done))))
              (lambda _
                (format #t "~a [color=red];~%" name)
                (cons name done)))
            (cons name done)))))

;; (format #t "digraph dependencies {~%")
;; (format #t "overlap=false;~%")
;; (format #t "splines=true;~%")
;; (get-npm-module-dot "mocha" '() 0)
;; (format (current-error-port) "~%")
;; (format #t "}~%")

;; tests
;;(display (slash->_ "babel/mocha")) ;works
;;(cache-handler "@babel/core") ;no errors but does not write to file, hmm...
(display "fetching")
(newline)
(display ;this is the call that used to blow the stack; see the rename above
 (http-fetch-json
  (string-append "https://registry.npmjs.org/" "mocha")))


Re: CDN performance

2018-12-13 Thread Giovanni Biscuolo
Hi Chris,

nice to see this discussion; IMHO, how GuixSD substitutes are
distributed is a key issue in our ecosystem, and it is _all_ about
privacy and *mass* metadata collection

most "normal users" are not concerned about this so they are fine with
super-centralization since it's a convenience... not us :-)

personally I've come to GuixSD because I see this project as a key part
in liberating me from this class of problems

Chris Marusich  writes:

[...]

> A summary, in the middle of the long thread, is here:
>
> https://lists.debian.org/debian-project/2013/10/msg00074.html

thank you for the reference, I've only read this summary

the key part of it IMHO is
  "Q: Do CDNs raise more security/privacy concerns than our mirrors?"

and the related subthread
 https://lists.debian.org/debian-project/2013/10/msg00033.html

the quick reply to the above question is: yes, CDNs raise more
security/privacy concerns than "distributed mirrors"

obviously "distributed mirrors" _do_ raise some security/privacy
concerns, but *centralization* raises many more

[...]

> Judging by that email thread, one of the reasons why Debian considered
> using a CDN was because they felt that the cost, in terms of people
> power, of maintaining their own "proto-CDN" infrastructure had grown too
> great.

I'm still new to GuixSD but understand enough to see that we are much
better equipped to maintain our own distributed network of substitute
caching servers... **transparently** configured :-)

[...]

> I also understand Hartmut's concerns.  The risks he points out are
> valid.  Because of those risks, even if we make a third-party CDN option
> available, some people will choose not to use it.

I'll probably be one of those; I'm considering maintaining a caching
substitute server in a "semi-trusted" colocated space, and I'd be very
happy to share that server with the community

[...]

> However, not everyone shares the same threat model.  For example,
> although some people choose not to trust substitutes from our build
> farm, still others do.

for this very reason, IMHO we should work towards a network of **very
trusted** build farms directly managed and controlled by the GuixSD
project sysadmins; if the build farms can provide substitutes quickly,
caching mirrors will be _much more_ effective than they are today

... and a network of "automated guix challenge" servers to spot
not-reproducible software in GuixSD

with a solid infrastructure of "scientifically" trustable build farms,
there is no reason not to trust substitute servers (this implies
working towards 100% reproducibility of GuixSD)

> The choice is based on one's own individual
> situation.  Similarly, if we make a third-party CDN option available and
> explain the risks of using it, Guix users will be able to make an
> educated decision for themselves about whether or not to use it.

that's an option... like a "last resort" for being able to use
GuixSD :-)

we could also teach people how to set up their own caching servers and
possibly share them with the rest of the local community (possibly with
some coordination from the project sysadmins)

for Milan, I plan to set up such a caching mirror in January 2019

[...]

happy hacking!
Gio

-- 
Giovanni Biscuolo

Xelera IT Infrastructures




Re: Using a CDN or some other mirror?

2018-12-13 Thread Giovanni Biscuolo
Hi Chris,

thank you for your CDN testing environment!

Chris Marusich  writes:

[...]

> For experimentation, I've set up a CloudFront distribution at
> berlin-mirror.marusich.info that uses berlin.guixsd.org as its origin
> server.  Let's repeat these steps to measure the performance of the
> distribution from my machine's perspective (before I did this, I made
> sure the GET would result in a cache hit by downloading the substitute
> once before and verifying that the same remote IP address was used):

[...]

> It would be interesting to see what the performance is for others.

[...]

measurements from my office network: Italy, 20 km north of Milan, FTTC
(90 Mbit/s measured bandwidth)

measurement from Berlin:

--8<---cut here---start->8---
url_effective: 
https://berlin.guixsd.org/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  http_code: 200
  num_connects: 1
  num_redirects: 0
  remote_ip: 141.80.181.40
  remote_port: 443
  size_download: 69899433 B
  speed_download: 9051388,000 B/s
  time_appconnect: 0,229271 s
  time_connect: 0,110443 s
  time_namelookup: 0,061754 s
  time_pretransfer: 0,229328 s
  time_redirect: 0,00 s
  time_starttransfer: 0,326907 s
  time_total: 7,722509 s
--8<---cut here---end--->8---

latency measured with mtr:

--8<---cut here---start->8---
HOST: roquette  Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 10.38.2.1  0.0%100.3   0.4   0.3   0.4   0.0

[...]

 18.|-- 141.80.181.40  0.0%10  112.5  77.1  55.6 201.7  47.1
--8<---cut here---end--->8---


from your mirror (third download):

--8<---cut here---start->8---
url_effective: 
https://berlin-mirror.marusich.info/nar/gzip/1bq783rbkzv9z9zdhivbvfzhsz2s5yac-linux-libre-4.19
  http_code: 200
  num_connects: 1
  num_redirects: 0
  remote_ip: 54.230.102.61
  remote_port: 443
  size_download: 69899433 B
  speed_download: 9702091,000 B/s
  time_appconnect: 0,172660 s
  time_connect: 0,037833 s
  time_namelookup: 0,003772 s
  time_pretransfer: 0,173263 s
  time_redirect: 0,00 s
  time_starttransfer: 0,212716 s
  time_total: 7,204574 s
--8<---cut here---end--->8---

latency measured with mtr:

--8<---cut here---start->8---
HOST: roquette   Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 10.38.2.1   0.0%100.4   0.4   0.4   0.4   0.0

[...]

 11.|-- ???100.0100.0   0.0   0.0   0.0   0.0
 12.|-- ???100.0100.0   0.0   0.0   0.0   0.0
 13.|-- ???100.0100.0   0.0   0.0   0.0   0.0
 14.|-- ???100.0100.0   0.0   0.0   0.0   0.0
 15.|-- 52.93.58.1900.0%10   36.1  34.6  32.9  37.1   1.2
 16.|-- ???100.0100.0   0.0   0.0   0.0   0.0
--8<---cut here---end--->8---

100% loss?  (probably just those hops not answering ICMP, since the
transfer itself completed fine)

from here it seems Berlin is about as performant as CloudFront

HTH!
Gio

-- 
Giovanni Biscuolo

Xelera IT Infrastructures






Re: Question about Guix documentation

2018-12-13 Thread swedebugia

On 2018-12-03 00:39, Laura Lazzati wrote:

1GB of RAM? That is a *lot* :)
I have 2 GB on my 2 day-to-day laptops. 1 with GNOME3 GuixSD and 1 with
Parabola+MATE. Both work fine and I rarely have to wait for other
programs than the really heavy ones (Libreoffice comes to mind).

Oh no! my retro PC is fancy :O and I did not know :)  I was going to
add that I did almost up to 4th year of university with it, but I felt
ashamed.  I guess you know where the name MATE comes from ;), it is at
the end of the site.  I will check the processor, because I don't
remember how many cores it has.


Actually, what makes a bigger difference than CPU speed or the number
of cores is HDD/SSD speed: because of the heavy store hard-linking,
profile-generation hooks, etc., Guix is quite disk-I/O intensive.


So installing an SSD in your old machine would probably speed things
up.  They are cheap these days, and you can get away with buying a
small one for little money (32 GB is enough if you trim generations and
run guix gc regularly).


--
Cheers
Swedebugia



Re: CDN performance

2018-12-13 Thread Chris Marusich
Ludovic Courtès  writes:

> Regarding the GNU sub-domain, as I replied to Meiyo, I’m in favor of it,
> all we need is someone to champion setting it up.

I could help with this.  Whom should I contact?

>> Regarding CDNs, I definitely think it's worth a try!  Even Debian is
>> using CloudFront (cloudfront.debian.net).  In fact, email correspondence
>> suggests that as of 2013, Amazon may even have been paying for it!
>>
>> https://lists.debian.org/debian-cloud/2013/05/msg00071.html
>
> (Note that debian.net is not Debian, and “there’s no cloud, only other
> people’s computer” as the FSFE puts it.)

I do try to avoid the term "cloud" whenever possible.  It's hard to
avoid when it's in the product name, though!  A wise man once said, "A
cloud in the mind is an obstacle to clear thinking."  ;-)

You may be right about debian.net.  I don't know who owns that domain.
It's confusing, since debian.org is definitely owned by the Debian
project, and the following page says they're using Amazon CloudFront:

https://deb.debian.org/

Maybe Debian still uses Amazon CloudFront, or maybe they don't any more.
In any case, I've found the following email thread, which documents a
thoughtful discussion regarding whether or not Debian should use a CDN.
They discussed many of the same concerns we're discussing here.

https://lists.debian.org/debian-project/2013/10/msg00029.html

A summary, in the middle of the long thread, is here:

https://lists.debian.org/debian-project/2013/10/msg00074.html

Later, part of the thread broke off and continued here:

https://lists.debian.org/debian-project/2014/02/msg1.html

That's as far as I've read.

Judging by that email thread, one of the reasons why Debian considered
using a CDN was because they felt that the cost, in terms of people
power, of maintaining their own "proto-CDN" infrastructure had grown too
great.  I believe it!  I think it would be ill-advised for the Guix
project to expend effort and capital on building and maintaining its own
CDN.  I think it would be wiser to focus on developing a decentralized
substitute solution (GNUnet, IPFS, etc.).

That said, I still think that today Guix should provide a third-party
CDN option.  For many Guix users, a CDN would improve performance and
availability of substitutes.  Contracting with a third party to provide
the CDN service would require much less effort and capital than building
and maintaining a CDN from scratch.  This would also enable the project
to focus more on building a decentralized substitute solution.  And once
that decentralized solution is ready, it will be easy to just "turn off"
the CDN.

I also understand Hartmut's concerns.  The risks he points out are
valid.  Because of those risks, even if we make a third-party CDN option
available, some people will choose not to use it.  For that reason, we
should not require Guix users to use a third-party CDN, just as we do
not require them to use substitutes from our build farm.

However, not everyone shares the same threat model.  For example,
although some people choose not to trust substitutes from our build
farm, still others do.  The choice is based on one's own individual
situation.  Similarly, if we make a third-party CDN option available and
explain the risks of using it, Guix users will be able to make an
educated decision for themselves about whether or not to use it.

>> Here, it took 0.459667 - 0.254210 = 0.205457 seconds (about 205 ms) to
>> establish the TCP connection after the DNS lookup.  The average
>> throughput was 1924285 bytes per second (about 40 megabits per second,
>> where 1 megabit = 10^6 bits).  It seems my connection to berlin is
>> already pretty good!
>
> Indeed.  The bandwidth problem on berlin is when you’re the first to
> download a nar and it’s not been cached by nginx yet.  In that case, you
> get very low bandwidth (like 10 times less than when the item is cached
> by nginx.)  I’ve looked into it, went as far as strace’ing nginx, but
> couldn’t find the reason of this.
>
> Do you any idea?

I made a typo here.  The value "1924285" should have been "4945831",
which is what measure_get printed.  However, the intended result (40
Mbps) is still correct.

Actually, I thought 40 megabits per second was pretty great for a
single-threaded file transfer that originated in Europe (I think?) and
terminated in Seattle (via my residential Comcast downlink).  I
requested that particular file many times before that final test run, so
it was probably already cached by nginx.

However, I believe you when you say that it's slow the first time you
download the substitute from berlin.  What path does the data take from
its origin through berlin?  If berlin needs to download the initial file
from another server, perhaps the connection between berlin and that
server is the bottleneck?  Maybe we should discuss that in a different
email thread, though.

> I’ve tried this from home (in France, with FTTH):
>
> [...]
>
> speed_download: 20803402.000 B/s

Wow,