On 08/10/12 19:35, Ague Mill wrote:
> Hi!
> 
> I experimented with yet another approach to improve the situation of our
> memory wiping mechanism. Maybe all we needed to fix the current process
> was 0f1f476d, but well...
> 
> So, here it is, in the `feature/hugetlb_mem_wipe` branch. It keeps a
> Linux+initramfs+userland program approach, but it does so with a little
> hand-crafted C program.
> 
> That piece of software uses mmap and hugetlb and some Linux vm tricks
> to wipe as much memory as possible. As an added bonus, it displays a
> progress bar.
> 
> See the commit message for more details.
> 
> I have successfully tested that code in a VM with more than 4 GB of
> memory and it looks like it works. I was not able to properly analyze
> the memory with that many bytes, though.
> 
> I'll be happy if someone could do some more testing in >= 4 GB
> conditions, as I am lacking the necessary hardware at the moment. I'd
> be interested in knowing how this branch compares with the current
> state of devel, both in time and in how much memory is actually
> overwritten.
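
For the record, here is my mental model of that approach, as a
simplified sketch (this is not the actual branch code; the names and
the 2 MiB huge page size are my own assumptions):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define HUGE_SZ (2UL * 1024 * 1024)   /* assuming 2 MiB huge pages */

    int main(void)
    {
        unsigned long wiped = 0;

        for (;;) {
            /* grab an anonymous huge page mapping... */
            void *p = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                           -1, 0);
            if (p == MAP_FAILED)
                break;                /* kernel is out of huge pages */
            /* ...and touch every byte so the backing RAM is overwritten */
            memset(p, 0, HUGE_SZ);
            wiped += HUGE_SZ;
            /* mappings are kept on purpose, so the kernel cannot hand
               these pages out again while we are still running */
        }
        fprintf(stderr, "wiped ~%lu MiB\n", wiped >> 20);
        return 0;
    }

If I understand correctly, the huge pages are what make this fast:
far fewer page faults and much less kernel bookkeeping per wiped byte.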

I've now benchmarked current devel vs. the feature/hugetlb_mem_wipe
branch. This was done in a 64-bit VM with 8 GB of RAM. In each test I
verified that the memory was completely filled with the pattern before
wiping.
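
(To be clear about what I count as a "pattern": I scan a raw dump of
the guest's memory for the 16-byte fill pattern, so the KiB figures
below are simply the count times 16 bytes. Roughly like the following
sketch; the pattern here is a placeholder, not the real one.)

    #include <stdio.h>
    #include <string.h>

    #define PAT_LEN 16
    /* placeholder: the real pattern is whatever the fill step wrote */
    static const unsigned char PATTERN[PAT_LEN] = "0123456789abcdef";

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <memory-dump>\n", argv[0]);
            return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }

        unsigned char buf[PAT_LEN];
        unsigned long hits = 0;

        /* naive aligned scan; good enough when the fill is aligned */
        while (fread(buf, 1, PAT_LEN, f) == PAT_LEN)
            if (memcmp(buf, PATTERN, PAT_LEN) == 0)
                hits++;

        printf("%lu patterns (~%lu KiB)\n", hits, hits * PAT_LEN / 1024);
        fclose(f);
        return 0;
    }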

* feature/hugetlb_mem_wipe:

  - With PAE kernel:
    * Patterns remaining after wipe: ~39K ≃ 600 KiB of memory.
    * Time required for wipe: 2.5 seconds.

  - With "normal" non-PAE kernel:
    * Patterns remaining after wipe: 51K ≃ 800 KiB of memory. Also, in
      this case hugetlb_mem_wipe exits at 51% progress with the
      following error:

        wipe_page: Cannot allocate memory
        spawn_new failed (1)

      OTOH, since only 51K patterns remain, the progress meter seems
      to be wrong (I suppose it's just a buffering issue) and it in
      fact dies at around 99% progress. (Some guesswork about this
      error follows after the results.)
    * Time required for wipe: ~1 second.

  - User feedback: The progress bar is nice, but sometimes it was
    drawn on top of other text, which made it hard to read.

* devel (many `sdmem` in parallel thanks to 0f1f476d):

  - With PAE kernel:
    * Patterns remaining after wipe: 0 (!)
    * Time required for wipe: 8 seconds.

  - With "normal" non-PAE kernel:
    * Patterns remaining after wipe: 900K ≃ 14 MiB of memory.
    * Time required for wipe: 4 seconds.

  - User feedback: Lots of stars everywhere. It's messy.
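
Regarding the "Cannot allocate memory" error above: my guess (and it
is only a guess) is that mmap() with MAP_HUGETLB returns ENOMEM once
the kernel cannot put together another huge page, e.g. because memory
is too fragmented on the non-PAE kernel. If so, falling back to
normal pages instead of exiting might be enough. A hypothetical
sketch, not what the branch does:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* hypothetical fallback, not what the branch currently does */
    static void *map_wipe_chunk(size_t len)
    {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED && errno == ENOMEM)
            /* huge page pool exhausted or memory too fragmented:
               retry the same amount with normal 4 KiB pages */
            p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p;   /* still MAP_FAILED if small pages fail too */
    }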

These are pretty interesting results. And embarrassing ones: running
"many sdmem instances at once" was implemented in commit 180f058 a
year ago next Monday. I wonder why we never investigated why that
didn't seem to solve the issue at the time. Oh well...
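
For completeness, my understanding of what that parallel wipe amounts
to (I have not re-checked 0f1f476d, and the sdmem flags are from
memory, so treat this as a sketch only):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        if (ncpus < 1)
            ncpus = 1;

        /* spawn one sdmem per CPU... */
        for (long i = 0; i < ncpus; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                /* flags from memory; check the actual invocation */
                execlp("sdmem", "sdmem", "-llf", (char *)NULL);
                perror("execlp");     /* reached only if exec failed */
                _exit(1);
            }
        }
        /* ...each instance allocates and overwrites memory until the
           kernel refuses; together they cover most of the free RAM */
        while (wait(NULL) > 0)
            ;
        return 0;
    }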

> Provided a little more feedback, this could go in 0.14. We can always
> revert if rc1 proves it deficient.

Given that:

* current devel cleans *all* memory in the most common case (PAE
  kernel), and does so without taking much more time, and
* I'm unsure what the implications are of hugetlb_mem_wipe exiting with
  that error on a non-PAE kernel,

I'd rather hold off on merging feature/hugetlb_mem_wipe until after
Tails 0.14.

Cheers!
