[Tails-dev] Weird commits on stable

2017-07-13 Thread Arnaud
Hey Tails,

Today I checked out the stable branch and saw some very weird
commits. For example:

060f5a68db3531a77cabc9864891e1b7f84a41ec

+quick loans with a guarantor
+
+  http://bestsololoan.com/ - payday loans
+
+http://bestsololoan.com/;>faxless payday advance
+
+ashland payday loan

Is this a normal commit, or is it a new kind of spam?

Just to let you know ;)

Cheers,
  Arnaud
___
Tails-dev mailing list
Tails-dev@boum.org
https://mailman.boum.org/listinfo/tails-dev
To unsubscribe from this list, send an empty email to 
tails-dev-unsubscr...@boum.org.

[Tails-dev] About SquashFS default compression setting

2017-05-11 Thread Arnaud
Dear Tails,

this is just a detail I'd like to discuss.

When I did my first Tails build a while ago, I didn't specify any
SquashFS compression setting, so I expected to get the default
compression. Amazingly, there's a setting called 'defaultcomp', which
corresponds to xz compression. So I was a bit surprised to see in the
logs that I ended up with gzip compression instead.

What I'm trying to say is that, from the point of view of the person who
builds Tails, there's no 'default' compression. The compression is
automatically chosen by the build system: xz for release, gzip otherwise
(tell me if I'm mistaken).
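To make the difference concrete, this is my understanding of what the two outcomes boil down to, expressed as `mksquashfs` compressor flags. The paths are placeholders; this is not copied from the actual build scripts:

```shell
# Illustration only: 'chroot' and the output file are placeholders.
mksquashfs chroot filesystem.squashfs -comp xz    # what release builds get
mksquashfs chroot filesystem.squashfs -comp gzip  # what other builds get
```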

So I think it's a bit misleading to have a setting named 'defaultcomp';
it would be better named 'xzcomp'.

What do you think? I can open an issue and provide a patch if you agree
with me.

Cheers !



Re: [Tails-dev] Tails build system update

2017-05-11 Thread Arnaud
Hi,

> In order to get Tails builds reproducible, we had to refactor the way we
> use vagrant in our build system [1]. Under the hood, a lot of our build
> scripts have changed, but for most use cases the transition should be
> transparent.

I have made some very minor cosmetic changes. I don't know exactly
what's the best way to submit them; I don't think it's worth opening a
ticket. You can have a quick look at my branch, available here:

https://gitlab.com/arnaud-preevio/tails/commits/arnaud/build-cosmetic

Cheers,

Arnaud

Re: [Tails-dev] Tails build system update

2017-05-11 Thread Arnaud
Dear Tails,

I just checked out the **stable** branch and tried the new build system.
Everything worked fine and smoothly on my machine (running Debian 9).

I've seen some warnings here and there. They were harmless, but I'll
report them anyway, just in case it matters.

-- During Box Creation --

During the execution of
`vagrant/definitions/tails-builder/postinstall.sh`, the command `apt-get
update` generates this warning. I see the same warning later on, after
the box is created, for each `apt-get update` I think.

W: Conflicting distribution:
http://time-based.snapshots.deb.tails.boum.org jessie/updates InRelease
(expected jessie but got jessie/updates)

The following error shows up during `apt-get -y dist-upgrade`, and later
on during various `apt-get install ...` commands.

E: Can not write log (Is /dev/pts mounted?) - posix_openpt (19: No
such device)
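For the record, the usual fix I've seen for this class of error is to make sure devpts is mounted in the environment where apt runs. This is an assumption on my side; I haven't checked where in the Tails scripts this would best be done:

```shell
# /target is a placeholder for the chroot where apt runs.
mount -t devpts devpts /target/dev/pts
# ... run the apt-get operations here ...
umount /target/dev/pts
```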

-- Misc --

In the Vagrantfile, I've seen that there are still 4 lines mentioning
the box name, URL and checksum. Are they still needed?

Cheers

Arnaud

On 05/10/2017 03:05 AM, bertagaz wrote:
> Hi,
>
> In order to get Tails builds reproducible, we had to refactor the way we
> use vagrant in our build system [1]. Under the hood, a lot of our build
> scripts have changed, but for most use cases the transition should be
> transparent.
>
> We released today all this changes, so for developers that are building
> Tails, we strongly advise to have a look at the build documentation
> [2] and adapt your usage.
>
> [1] see tickets #11972, #11979, #11980, #11981 or #12409
> [2] https://tails.boum.org/contribute/build
>
> bert.



Re: [Tails-dev] QEmu guest agent in tails builder

2017-05-08 Thread Arnaud
Oops, here's an edit to the previous mail.

> $ sudo systemctl enable acpid
> $ sudo systemctl status acpid

In the second line, replace `status` with `start`.

Moreover, `acpid` requires additional configuration that I forgot to
mention. It seems that by default it does nothing.

So, let's create an event to handle the power button.

$ echo 'event=button/power
action=/etc/acpi/actions/poweroff.sh %e
' | sudo tee /etc/acpi/events/power-button

Then, create the action script to handle this event.

$ sudo mkdir /etc/acpi/actions
$ echo '#!/bin/sh
/sbin/shutdown -h now
' | sudo tee /etc/acpi/actions/poweroff.sh
$ sudo chmod +x /etc/acpi/actions/poweroff.sh

With that, the VM now seems to shut down properly on `virsh shutdown`.

Cheers,
Arnaud



Re: [Tails-dev] QEmu guest agent in tails builder

2017-05-08 Thread Arnaud

On 04/29/2017 07:39 PM, intrigeri wrote:
> I'm a bit confused: our infrastructure uses libvirt/KVM, and "virsh
> shutdown" works nicely. Back in the pre-systemd days, we had to
> install acpid in the VMs to enable this functionality, and now
> systemd-logind handles it by default in the KVM guests.
>
> So I wonder: what's special about our Vagrant build VM, that prevents
> this feature from working without qemu-guest-agent?

In the Vagrant build VM, there is no `systemd-logind` installed. As for
`acpid`, it's installed, but it's not running.

$ sudo systemctl status acpid
● acpid.service - ACPI event daemon
  Loaded: loaded (/lib/systemd/system/acpid.service; disabled)
  Active: inactive (dead)

Notice the 'disabled' here.

So I enabled and started acpid.

$ sudo systemctl enable acpid
$ sudo systemctl status acpid

Now the VM responds to `virsh shutdown`!

Do you guys think this is something that was forgotten in the base box?
Should it be added to the provision script (yep, it's a bit ugly), or
should we just keep on living without it?

Cheers !

Arnaud.


Re: [Tails-dev] QEmu guest agent in tails builder

2017-04-23 Thread Arnaud


On 04/22/2017 11:23 PM, anonym wrote:
> Arnaud:
>> On 04/22/2017 06:16 PM, anonym wrote:
>>> If the fix I proposed above isn't good enough for you, I suggest
>>> you wait and return to this issue in a few months -- there's an
>>> ongoing rework of how we use Vagrant which moves in the direction
>>> where each build will start with a completely fresh (first boot)
>>> builder VM, and whenever a build finishes (success or not) the VM
>>> is immediately destroyed and undefined. So I think your problem
>>> will just go away once this work is merged into our main branches.
>>> Fair enough?
>> Alright I didn't know that. This will indeed solve the problem. Be sure
>> to include an easy way to see the logs when a build fails, and to let
>> the VM alive so that one can log in and investigate ;)
> Which logs? The .buildlog is what you see written to stdout/stderr, so I 
> guess you mean something else (?).

Indeed, that's what I meant; it's all on stdout/stderr, you're right.

> Any way, on failure the VM is not destroyed unless the `forcecleanup` option 
> is used, and on success it is destroyed unless `keeprunning` option is used, 
> so I think you'll have what you want. Furthermore there's a `rescue` option 
> which, when set, implies `keeprunning` *and* will prevent the build directory 
> to be cleaned up after a failure, so that is an improvement in this area I 
> think you'll agree.

Indeed it sounds appealing ;)

> Actually, as our most active user of the build system outside of the "core" 
> Tails contributors, I'd really be interested in your feedback on what we have 
> done so far, and if you can think of improvements vs the workflow you have. 
> If you are interested, please checkout the wip/11972-use-vagrant-in-jenkins 
> branch and play around. :) Please backup your vagrant/.vagrant directory in 
> case you want to return back to the old build system!

Thanks for the info, I will give it a try asap, which means in 2 or 3 weeks.

> Another change with this branch is that from now on the base box (i.e. 
> original builder VM image) is built locally (using only Debian packages, 
> instead of downloading a big binary blob from us that you have to trust). 
> Since your Internet connection is so glitchy I suggest that you setup 
> apt-cacher-ng on your host system and run this the first time you use this 
> branch:
>
> http_proxy=http://192.168.121.1:3142 rake basebox:create
>
> i.e. so packages are cached on your host when building the base box, so if 
> there's a network issue and you have to restart the caching will help you.

OK, I was indeed thinking about setting up my own apt-cacher-ng instance.
This is one more reason to do so; I can't delay it any more :)

>> I guess this rework implies that you move `apt-cacher-ng` out of the
>> builder, right? Do you plan to include it in another VM, or will you
>> just remove it completely and let people handle it by themselves?
> The cache is moved to a separate VM disk which is the only thing kept between 
> builds. I think you'll like this! :)

Yep, this is something I need badly: having apt-cacher on the host, or
in another VM, or wherever, anywhere but NOT on the builder machine. I
should have started with that; it's nothing complicated and it's well
documented.

Ok thanks for all the details, I'll try the branch you mentioned and
give you some feedback.

Cheers !



Re: [Tails-dev] QEmu guest agent in tails builder

2017-04-22 Thread Arnaud

On 04/22/2017 06:16 PM, anonym wrote:
> Arnaud:
>> Dear Tails,
>>
>> I'd like to propose a little improvement to the Tails Builder VM.
>>
>> My use-case is that I build Tails locally on my laptop. My concern is a
>> little lack of convenience: if ever I forget to close the Tails Builder
>> VM (using `rake vm:halt`) after I'm done, the laptop gets stuck at
>> shutdown. I can see these log messages:
>>
>> libvirt-guests.sh: Waiting for guest xxx to shut down, 300 seconds left
>> libvirt-guests.sh: Waiting for guest xxx to shut down, 295 seconds left
>> ...
> Another way to work around this is to set `SHUTDOWN_TIMEOUT=0` in 
> `/etc/default/libvirt-guests`.

Yeah, I'm not a big fan of that, since it's a dirty shutdown. I'm already
struggling enough with the builds, I don't want to add this kind of
additional question, like: "Is my build failing again because my VM
somehow got corrupted by these continuous unclean shutdowns? Alright,
just in case, let's destroy it and start all over again."

Then it's two more hours of waiting...

Although it's a possible solution for sure, you're right to mention it.
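For reference, the setting anonym mentions lives in `/etc/default/libvirt-guests`. A sketch of the relevant lines (the values shown are just examples, not a recommendation):

```shell
# /etc/default/libvirt-guests (excerpt)
ON_SHUTDOWN=shutdown   # ask guests for a clean shutdown instead of suspending
SHUTDOWN_TIMEOUT=0     # seconds to wait for guests to stop; 0 = don't wait
```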

> If the fix I proposed above isn't good enough for you, I suggest you wait and 
> return to this issue in a few months -- there's an ongoing rework of how we 
> use Vagrant which moves in the direction where each build will start with a 
> completely fresh (first boot) builder VM, and whenever a build finishes 
> (success or not) the VM is immediately destroyed and undefined. So I think 
> your problem will just go away once this work is merged into our main 
> branches. Fair enough?

Alright, I didn't know that. This will indeed solve the problem. Be sure
to include an easy way to see the logs when a build fails, and to leave
the VM alive so that one can log in and investigate ;)

I guess this rework implies that you move `apt-cacher-ng` out of the
builder, right? Do you plan to include it in another VM, or will you
just remove it completely and let people handle it by themselves?

Cheers !

Arnaud


[Tails-dev] QEmu guest agent in tails builder

2017-04-22 Thread Arnaud
Dear Tails,

I'd like to propose a little improvement to the Tails Builder VM.

My use-case is that I build Tails locally on my laptop. My concern is a
little lack of convenience: if I ever forget to close the Tails Builder
VM (using `rake vm:halt`) after I'm done, the laptop gets stuck at
shutdown. I can see these log messages:

libvirt-guests.sh: Waiting for guest xxx to shut down, 300 seconds left
libvirt-guests.sh: Waiting for guest xxx to shut down, 295 seconds left
...

So basically, what happens is that systemd tries to stop the libvirt
guests by invoking `/etc/init.d/libvirt-guests stop`. This script ends
up invoking `virsh shutdown tails-builder-...`, and this command fails.

This is something you can easily verify by yourself. Launch the VM
(`rake vm:up`), then try various ways to stop it. The commands that
allow stopping the VM, from the highest level to the lowest, are:
- OK: rake vm:halt
- OK: vagrant halt 
- KO: virsh -c qemu:///system shutdown 

At the libvirt level, stopping the VM doesn't work. To start with, can
anyone confirm that, or am I the only one seeing this behavior?

After a bit of digging and experimenting, it turns out that this is not
a bug, just the expected behavior. It seems that libvirt tries to shut
the VM down by speaking to the QEmu Guest Agent, which is not installed
on the VM. So nothing happens, the libvirt shutdown script gets stuck
there, and my laptop doesn't shut down.

So!

I did just that: installed the QEmu guest agent in the VM, enabled it in
the Vagrantfile, and since then my life is so much better: my laptop
shuts down reliably and peacefully. It's a real improvement for me.
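For completeness: besides installing the agent inside the guest, libvirt needs a virtio serial channel in the domain XML for the agent to talk over. This is standard libvirt syntax; I'm assuming this is what enabling it in the Vagrantfile generates, but I haven't diffed the resulting XML:

```xml
<!-- Inside the <devices> section of the libvirt domain XML -->
<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
```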

Are you guys interested in enabling that upstream? Should I open a
ticket along with the patch? Or are there some reasons why you didn't
enable the QEmu guest agent in the first place?

Cheers !
Arnaud



References:
<https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/chap-QEMU_Guest_Agent.html>


Re: [Tails-dev] Debian 9: Build fails consistently, name resolution fails sooner or later

2017-04-04 Thread Arnaud


On 04/04/2017 12:45 PM, anonym wrote:
> I don't know what you mean with "Tor download" -- the build system itself 
> doesn't run or use tor.

I was talking about the Tor Browser that is downloaded in
`config/chroot_local-hooks/10-tbb` through a series of `curl` commands.
This part takes forever to complete. I didn't notice that it was cached
as well.

> Any way, the "offline" option is supposed to enable a true offline building 
> mode, and if a build fails due to lack of Internet access, that is a bug. It 
> relies on all interactions with the Internet being done with applications 
> that respect the http_proxy environment variable (which points to 
> apt-cacher-ng, which is configured to allow downloading and caching more than 
> just .deb:s). Specifically, `wget` respects http_proxy, and apparently so do 
> all the various mechanisms used to fetch firmwares and other files in the 
> post-install scripts of all .deb:s Tails uses.

OK, I guess I just have to try it. I'll see how it goes and will let you
know if anything bad happens.

> If you know C, perhaps you'd want to look into implementing (optional) 
> download retrying in apt-cacher-ng? That would, of course, be the best 
> solution.

I'm comfortable with C, but not at all comfortable with these network
and proxy things ;) But it's nice that you point me in the right
direction. At least I know where to look next.

Cheers.



Re: [Tails-dev] Debian 9: Build fails consistently, name resolution fails sooner or later

2017-04-04 Thread Arnaud


On 04/04/2017 11:49 AM, anonym wrote:
>> So at the moment, I'm still in this situation where build fails, and I
>> retry and retry until at some point it succeeds. I didn't have much time
>> to work on that yet. I'll come back to you when I workaround this issue,
>> one way or another.
> A simple workaround for when you have cached everything needed for the build 
> (i.e. after your have successfully built *once*) is to add "offline" to the 
> TAILS_BUILD_OPTIONS environment variable. Of course, getting to the point 
> when you have everything cached will be painful, but at least you will be 
> able to rebuild as much as you like without risk of network issues.

Good idea, I didn't try this option yet. I guess it will avoid any
`apt` request; however, what about the Tor download that is done with
`wget`? This is not cached, I think, and it's quite a long download, one
that fails.


> Here's a crazier idea to explore:
>
> * run apt-cacher-ng on the host (base your configuration on 
> vagrant/provision/assets/acng.conf)
> * add "extproxy" to TAILS_BUILD_OPTIONS
> * set the http_proxy environment variable to point to the host's 
> apt-cacher-ng instance
> * the "offline" option doesn't work with external proxies, but you can set 
> Offline mode in apt-cacher-ng's configuration instead
> * sadly apt-cacher-ng lacks an option for retrying failed downloads, but if 
> you can find any other HTTP proxy that can, set that up as well and then 
> configure apt-cacher-ng to use it as a parent proxy itself (see the `Proxy` 
> option).

Along the same lines: did you guys consider having apt-cacher in another
VM? As a beginner, I'm still at the point where I modify the VM here and
there, until at some point I'm not sure anymore what I modified, and in
the end I destroy the VM to start again from scratch. In these situations,
having another VM for apt-cacher would be a real delight.

> Happy hacking! Let us know if you find out something about the last point 
> (parent proxy that retries). If it improves your situation without using much 
> resources we probably want to add it to the VM's default apt-cacher-ng setup.

Yep I'll tell you as soon as something interesting happens on my side ;)

Cheers.



Re: [Tails-dev] Debian 9: Build fails consistently, name resolution fails sooner or later

2017-04-04 Thread Arnaud
Hi intrigeri, thanks for the feedback.

On 04/04/2017 08:42 AM, intrigeri wrote:
>
> This seems to have fallen through the cracks. TBH we're not very good
> at tracking patches sent over email, so in the future, I recommend
> filing Redmine tickets, assigned to the current release manager
> (https://tails.boum.org/contribute/calendar/), and with a patch
> attached (or better: a pointer to a branch).

No worries. Actually, I didn't mention it again because, well, this
patch solves nothing.

Making the build system succeed on a flaky network seems to require much
more than just adding a retry option to apt. You can forget about this
change, and also about the two other changes that I sent in this mail a
few weeks ago:

https://mailman.boum.org/pipermail/tails-dev/2017-March/011280.html

While these patches don't hurt, I can tell you now that they solve
nothing. And I don't want to be responsible for cluttering the build
system with useless stuff like that ;)

So at the moment, I'm still in this situation where the build fails, and
I retry and retry until at some point it succeeds. I haven't had much
time to work on that yet. I'll come back to you when I work around this
issue, one way or another. In the meantime, don't bother with it.

I'll follow your recommendation for the next issues I face.

Cheers !


Re: [Tails-dev] Experimenting with Tails, preferred workflow ?

2017-03-14 Thread Arnaud
anonym:
> Let me add one, which is a special case of intrigeri's last point:
>
> * start Tails and "symlink" the relevant/changed files from your Tails source 
> tree into the correct places in the filesystem of a running Tails session. So 
> first you use sshfs or libvirt filesystem shares to make your Tails sources 
> available inside the Tails guest, then you just symlink the source files to 
> the correct places. This allows for rapidly testing your changes, but won't
> work in all situations (e.g. stuff happening during early boot). YMMV.

Nice, thanks for sharing tips :)

>> Now I just read a bit about aufs and I understand better how it works
> It's a cool technology, but I don't think it's really needed. :)

I'm not sure I follow you. It IS actually used; that's what `mount` and
`/etc/fstab` show me. If I understand properly, it's thanks to aufs that
the rootfs is writable (in RAM).

BTW, I stumbled upon quite a nice article about the different filesystem
strategies used by different live CDs (Debian, Ubuntu, Fedora); here's
the link in case you're interested:
https://major.io/2014/07/29/adventures-in-live-booting-linux-distributions/

Cheers.



Re: [Tails-dev] Experimenting with Tails, preferred workflow ?

2017-03-14 Thread Arnaud
intrigeri:
> AFAIK, modifying the rootfs in a persistent manner will produce very weird 
> results next time you boot
What do you mean? Is it because of some security mechanism of Tails
that will detect my changes?
> I personally combine two approaches, depending on the need:
>
>  * build a modified ISO image
>  * start Tails and modify files in there (it *is* writable, but of course the 
> modifications go to a ramdisk)
Thanks for sharing! I wasn't even aware that the rootfs was writable; I
thought it was just a read-only squashfs filesystem. Now that I've read
a bit about aufs, I understand better how it works.

Cheers


[Tails-dev] Experimenting with Tails, preferred workflow ?

2017-03-14 Thread Arnaud
Dear Tails,

just pondering and wondering about the preferred way (if any) to
experiment with and modify the Tails OS. I'd be happy to have Tails
running in a VM and WRITABLE, so that I can really play with it. This is
really for experimentation purposes.

Right now, I'm trying to run Tails in a VM in this particular fashion:
- booting the kernel over TFTP
- loading the rootfs via an NFS filesystem (the rootfs having been
extracted with unsquashfs beforehand)

By doing that, I'm just recycling the workflow I used when I was working
on embedded Linux. But I'm wondering if there's a better workflow,
something maybe obvious that I missed, due to my lack of experience with
live CDs.

Cheers !



Re: [Tails-dev] Debian 9: Build fails consistently, name resolution fails sooner or later

2017-03-12 Thread Arnaud
On 03/11/2017 08:41 PM, anonym wrote:
> Arnaud:
>> --- a/vagrant/provision/setup-tails-builder
>> +++ b/vagrant/provision/setup-tails-builder
> [...]
>> +# Configure apt to retry
>> +echo 'APT::Acquire::Retries "20";' > /etc/apt/apt.conf.d/99retries
> This will only affect provisioning, not any usage of APT during the build, 
> right? Or will it propagate into the chroot somehow?

Indeed, it is not effective in the chroot.

One way to set this in the chroot is to pass the option explicitly when
running `lb config` (patch attached). Another way would be to add a file
to the `config` directory, but I'm not sure exactly which file.
According to today's live-build doc, creating a file
`config/apt/apt.conf` should work. But that doesn't seem to match the
existing layout: Tails already has a `config/chroot_apt` directory, so
maybe it's better to drop a file there; however, I didn't try.

Anyway.

The fact is that it doesn't help me much, because apt-get **will not**
retry when it's run behind `apt-cacher-ng`. I strongly suspect that it's
because apt-cacher-ng returns error code 503 "Service Unavailable" on
failure, instead of forwarding the original error code. I got in touch
with the apt-cacher-ng maintainers and will let you know when I know
more about that.
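If that suspicion is right, the retrying has to happen around the proxy rather than in apt itself. Here's a hypothetical shell sketch of that behavior (none of this is apt or apt-cacher-ng code; `retry` and `flaky` are made-up names for illustration):

```shell
# Retry a command until it succeeds or the retry budget runs out,
# which is roughly the behavior a retrying parent proxy would add.
retry() {
    budget="$1"; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$budget" ] && return 1
        sleep 0.1   # small pause between attempts
    done
}

# Simulate a flaky download: it fails once, then "the network recovers".
rm -f /tmp/retry-demo-ok
flaky() {
    if [ -e /tmp/retry-demo-ok ]; then return 0; fi
    touch /tmp/retry-demo-ok   # next attempt will succeed
    return 1
}

retry 5 flaky && echo "download succeeded"
```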

Cheers

---
 auto/config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/auto/config b/auto/config
index 236ac0f1b9..d0b9c1a2e3 100755
--- a/auto/config
+++ b/auto/config
@@ -58,6 +58,7 @@ perl -pi \
 # set Amnesia's general options
 $RUN_LB_CONFIG \
--verbose \
+   --apt-options "--yes -oAcquire::Retries=20" \
--apt-recommends false \
--architecture amd64 \
--backports false \
-- 
2.11.0



Re: [Tails-dev] Debian 9: Build fails consistently, name resolution fails sooner or later

2017-03-11 Thread Arnaud
Hi,
>> +# Configure apt to retry
>> +echo 'APT::Acquire::Retries "20";' > /etc/apt/apt.conf.d/99retries
> This will only affect provisioning, not any usage of APT during the
> build, right? Or will it propagate into the chroot somehow?

I'm not sure about that; actually, I don't fully understand this part
yet. I didn't think about the chroot, plus I don't know how
apt-cacher-ng interacts with all of that.

After applying this patch, my build succeeded, but I haven't really
witnessed an apt-get command failing and retrying yet. I need to do more
tests. I'll check that and let you know.

> We have ticket #9430 where we considered adding this configuration
> during the build as well, but we thought that "#5926 [1] will
> magically solve 99% of this problem, so IMO I should not waste time
> trying to fix it differently here" so we didn't in the end. Apparently
> the "1%" of this problem that we ignored is to enable Tails to be
> built in regions with worse connectivity to our servers than us.
> Admittedly that ticket seemed to have a focus on these issues vs our
> infra, which indeed was solved, but now we have another reason.

Thanks for the reference, I'll take care to comment on these tickets if
needed.

Cheers !

Arnaud


Re: [Tails-dev] Debian 9: Build fails consistently, name resolution fails sooner or later

2017-03-11 Thread Arnaud
Hi !

I finally managed to build Tails! So let me share here the little
patch I ended up with, in case it can help someone. This patch deals
with the transient network problems I experienced.

Additionally, I increased the RAM of the VM to 1024M, just to be on the
safe side. I saw the OOM killer strike once (clearly visible in the
logs) during my several attempts to build Tails.

Cheers.

---
 config/chroot_local-hooks/10-tbb  | 2 +-
 vagrant/provision/setup-tails-builder | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/config/chroot_local-hooks/10-tbb
b/config/chroot_local-hooks/10-tbb
index 54e77e63be..8bf5d5c0b2 100755
--- a/config/chroot_local-hooks/10-tbb
+++ b/config/chroot_local-hooks/10-tbb
@@ -35,7 +35,7 @@ download_and_verify_files() {
 (
 cd "${destination}"
 echo "Fetching ${base_url}/${tarball} ..."
-curl --remote-name "${base_url}/${tarball}"
+curl --retry 20 --remote-name "${base_url}/${tarball}"
 )
 actual_sha256="$(sha256sum "${destination}/${tarball}" | cut
-d' ' -f1)"
 if [ "${actual_sha256}" != "${expected_sha256}" ]; then
diff --git a/vagrant/provision/setup-tails-builder
b/vagrant/provision/setup-tails-builder
index 9c6b044d49..75f84ffd87 100755
--- a/vagrant/provision/setup-tails-builder
+++ b/vagrant/provision/setup-tails-builder
@@ -18,6 +18,9 @@ fi

 rm -f /etc/apt/preferences.d/* /etc/apt/sources.list.d/*

+# Configure apt to retry
+echo 'APT::Acquire::Retries "20";' > /etc/apt/apt.conf.d/99retries
+
 # Add our builder-jessie repository for live-build, and pin it low
 echo 'deb http://deb.tails.boum.org/ builder-jessie main' >
/etc/apt/sources.list.d/tails.list
 sed -e 's/^[[:blank:]]*//' > /etc/apt/preferences.d/tails 

Re: [Tails-dev] Wiki patches, fix dead links

2017-03-10 Thread Arnaud
Good to know, thanks for sharing the info. I'll be careful to check
Redmine next time.

Cheers


On 03/11/2017 02:22 AM, Cody Brownstein wrote:
> Thank you for the patches.
>
> The broken link to the Debian Live Manual is bug #12261 on Redmine.
>
> https://labs.riseup.net/code/issues/12261
>
> The link proposed in the patch is to a manual "for a much more
> up-to-date version of live-build than [is used for Tails]," (quoting
> anonym).
>
> (I learned this also.)
>
>
> On 03/10/2017 07:35 AM, Arnaud wrote:
>> Dear Tails,
>>
>> here come some wiki patches. BTW, what is the preferred way to contribute
>> trivial wiki patches like these ones?
>>
>> Cheers,
>> Arnaud
>>


[Tails-dev] [PATCH 1/2] wiki: fix link to vagrant page

2017-03-10 Thread Arnaud
Signed-off-by: Arnaud <arn...@preev.io>
---
 wiki/src/contribute/build/vagrant-setup.mdwn | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/wiki/src/contribute/build/vagrant-setup.mdwn 
b/wiki/src/contribute/build/vagrant-setup.mdwn
index c13c82f5b9..26b57d0478 100644
--- a/wiki/src/contribute/build/vagrant-setup.mdwn
+++ b/wiki/src/contribute/build/vagrant-setup.mdwn
@@ -16,7 +16,7 @@ Vagrant support files are located in the `vagrant` directory 
at the root of the
 
 This directory contains:
 
- * `Vagrantfile`: [configuration file for 
Vagrant](http://vagrantup.com/docs/vagrantfile.html],
+ * `Vagrantfile`: [configuration file for 
Vagrant](http://vagrantup.com/docs/vagrantfile/),
  * `provision/setup-tails-builder`: idempotent provisioning script that is
run inside the virtual machine upon boot,
  * `provision/assets/build-tails`: build script to be run inside the
-- 
2.11.0


Re: [Tails-dev] Debian 9: Build fails consistently, name resolution fails sooner or later

2017-03-10 Thread Arnaud
Hi all,

thanks for your support !

I updated to Tails 3.0-beta2, destroyed the VM, started again, and
failed again!

My build config is very 'vanilla'; I didn't touch anything. So it means
that the build happens within the VM, with apt-cacher-ng enabled. The VM
has the default settings: 512M of RAM and enough storage (I didn't find
where that one is defined, btw).

However, after a lot of investigation (mostly in the wrong direction),
I'm pretty sure I know what's wrong. It's not Tails, it's not the VM,
it's not my config. It seems to be the network here in Vietnam.

So yep, I'm living in Vietnam. Overall, the network is not that bad
here; however, there's a problem with the underwater cable AAG (Asia
America Gateway). Basically, from time to time the AAG stops working,
and the AAG is the main cable that connects Vietnam to the rest of the
world. What happens then is a bit unclear to me, to say the least.

Well, from a user point of view, what happens is very clear. The web
becomes a bit unreliable. Sometimes, websites are very slow to load, or
become completely unavailable for a few minutes. Amazingly, Google
searches are still fast (more on that later). I've seen `apt-get update`
on my machine starting very fast, then slowing down and down, then
stalled. So I opened my `sources.list`, got rid of the
`httpredir.debian.org` URL, replaced it with `debian.xtdv.net` (the only
Vietnamese Debian mirror that I know) to ensure my traffic remains in
Vietnam, and now it works like a charm.
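
As an aside, that kind of mirror swap can be sketched like this. This is a minimal illustration on a sample file, not the real `/etc/apt/sources.list`; the mirror names are the ones mentioned above:

```shell
# Minimal sketch of the mirror swap, done on a sample file so we
# don't touch the real /etc/apt/sources.list.
printf 'deb http://httpredir.debian.org/debian stretch main\n' > /tmp/sources.list.example

# Point apt at the Vietnamese mirror instead of the redirector.
sed -i 's|httpredir\.debian\.org|debian.xtdv.net|' /tmp/sources.list.example

cat /tmp/sources.list.example
# deb http://debian.xtdv.net/debian stretch main
```

After a change like that, `apt-get update` only ever talks to the in-country mirror.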

From a technical point of view, what happens is quite unclear. I played
a bit with `nmap --traceroute --script traceroute-geolocation`, and this
is a great tool ! I can see that requests to
time-based.snapshots.deb.tails.boum.org go a long way through Europe.
Requests to www.vagrantup.com go through Japan, which seems a bit more
direct. Amazingly, requests to Google go directly from Vietnam to the
States, which explains why they are still fast when everything else is
slow, and which makes me wonder if Google has private undersea cables.

Anyway, I'm no expert, and I'm not sure how to interpret these results.
I have no idea what they will look like when I run the same traceroute
tomorrow. So let's get back to Tails.

The build process involves downloading a lot of data, most of it from
the Tails servers, and I think the problem here is just that the
connectivity in Vietnam right now is way too bad to handle that. Somehow
it always fails sooner or later: it times out or something.

From my understanding, I can change the Debian mirror used for
provisioning the VM. But when it comes to building the Tails ISO, I have
no choice but to download the packages from the Tails mirror, right ?
Same goes for Tor Browser ?

Right now I'm working on tweaking the build system, and adding retries
here and there, so that the build keeps going and doesn't give up so
easily. I think that `apt-cacher-ng` should help me to mitigate the
problem, but up to now I destroyed the VM too often to take advantage of
it ;)
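
To give an idea, the retry tweak I have in mind looks roughly like this. It's my own sketch, not actual Tails build-script code; `retry` and `flaky` are made-up names:

```shell
# Illustrative retry wrapper: run a command up to N times before giving up.
retry() {
    attempts=$1; shift
    n=1
    until "$@"; do
        if [ "$n" -ge "$attempts" ]; then
            echo "giving up after $n attempts" >&2
            return 1
        fi
        n=$((n + 1))
        sleep 1
    done
}

# Demo: a command that only succeeds on its 3rd invocation.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }
retry 5 flaky && echo "succeeded after $tries attempts"
# prints: succeeded after 3 attempts
```

Wrapping the flaky `wget`/`curl` calls in something like this would let the build survive the intermittent name resolution failures instead of dying on the first one.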

I'll keep trying and let you know how it goes.

Cheers !



If you're interested in all this (a bit off-topic), you might find the
following links interesting:

http://www.submarinecablemap.com

https://www.citypassguide.com/en/travel/ho-chi-minh-city/daily/news/neverending-story-vietnam-and-the-undersea-internet-cable-ruptures






[Tails-dev] Debian 9: Build fails consistently, name resolution fails sooner or later

2017-03-08 Thread Arnaud
Dear Tails,

first, let me quickly introduce myself, I'm Arnaud, part of the
SilentKeys team. Please refer to this mail to remember about SilentKeys
(it was quite a while ago):
https://mailman.boum.org/pipermail/tails-project/2016-April/000488.html

I've been trying to build a Tails image and haven't succeeded yet. My
machine is running Debian 9.

As for Tails, I checked out the tag '3.0-beta1', and that's what I'm
trying to build at the moment.

So far, the build always fails sooner or later because of a network
failure, more precisely a name resolution failure. A typical log looks
like this:

Err http://time-based.snapshots.deb.tails.boum.org sid Release.gpg
Could not resolve 'time-based.snapshots.deb.tails.boum.org'

The failure doesn't always happen at the same point of the build. At
first, I thought it was related to `apt`, but I also experienced a
failure on a `curl` command (when downloading Tor Browser, in
`config/chroot_local-hooks/10-tbb`). I don't have the log anymore, but
it was also a name resolution failure.

I also experienced another problem at provisioning: sometimes the call
`/usr/lib/apt-cacher-ng/expire-caller.pl` (from `setup-tails-builder`)
hangs. It might be related or not, I'm not sure.

I don't think the problem is my internet connectivity. I have a good
connection and never experience this kind of problem in my daily use. I
tried a little test like this in the VM for a while:

while true; do wget -O /dev/null -nv time-based.snapshots.deb.tails.boum.org; sleep 5; done

I didn't see any failure.

I had some doubts about `apt-cacher-ng`, so I ran a build without a proxy:

TAILS_BUILD_OPTIONS=noproxy rake build

It failed all the same.

Now I'm a bit stuck. I'm wondering if I should dig on the `libvirt`
side, but I can't find anything related on the net. I'm wondering if I'm
missing something obvious.

If anyone has already experienced this kind of error, or has any advice,
that would be great.

Cheers,
Arnaud