Public bug reported:

On the server image on an RPi3 B+, I run this script:

```
#!/bin/bash -eu
mount_and_read()
{
    local snap=$1
    local mnt_d=$2
    mount "$snap" "$mnt_d"

    # We should not redirect to /dev/null here: GNU tar detects /dev/null
    # as the archive target and skips reading the file contents (see the
    # link below). Writing to /dev/zeros creates a regular file instead,
    # so tar really reads everything.
    # https://unix.stackexchange.com/questions/512362/why-does-tar-appear-to-skip-file-contents-when-output-file-is-dev-null
    tar -cO "$mnt_d"/ > /dev/zeros
}

snap_d=$(mktemp -d)
for i in {1..3}; do
    # Drop the page cache, dentries and inodes so every run reads from disk
    sync && echo 3 > /proc/sys/vm/drop_caches
    time mount_and_read "$1" "$snap_d"
    umount "$snap_d"
done
rmdir "$snap_d"
```

With these commands:

```
$ sudo mount -t tmpfs -o size=400M tmpfs tmpfs/
$ cd tmpfs
$ snap download core22
$ sudo ../measure-snap-mnt.sh core22_169.snap
```

I get these results on 20.04:
real    0m21.667s
user    0m0.302s
sys     0m20.299s

real    0m21.650s
user    0m0.351s
sys     0m20.417s

real    0m21.718s
user    0m0.285s
sys     0m20.528s

On 22.04 server image:
real    0m25.084s
user    0m0.407s
sys     0m23.778s

real    0m25.064s
user    0m0.397s
sys     0m23.768s

real    0m25.051s
user    0m0.369s
sys     0m23.803s

That is a considerable difference (~16%). The slowdown affects snaps
compressed with xz, and although these measurements were taken on the
server image, the bigger impact is on UC (Ubuntu Core) images, where the
boot time increases considerably because of this.
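To double-check the premises of the report, something like the sketch below can be run on both images; it assumes squashfs-tools is installed and that `core22_169.snap` (the snap downloaded above) is in the current directory. The exact `unsquashfs -s` output wording may vary between squashfs-tools versions.

```shell
#!/bin/sh
# Sketch: confirm which compressor the snap's squashfs uses, and which
# kernel is running (20.04 ships 5.4, 22.04 ships 5.15).
snap_file="${1:-core22_169.snap}"
if [ -f "$snap_file" ]; then
    # -s prints only the squashfs superblock, which names the compressor
    # (expected to mention xz for this snap)
    unsquashfs -s "$snap_file" | grep -i compression
else
    echo "snap file $snap_file not found; skipping compression check" >&2
fi
# Kernel version, to tie the timings to 5.4 vs 5.15
uname -r
```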

** Affects: linux-raspi (Ubuntu)
     Importance: Undecided
         Status: New

** Summary changed:

- Accessing squashfs with lzma compression is ~16% slower in 5.15 when compared to 5.4
+ Accessing squashfs with lzma compression is ~16% slower in 5.15 when compared to 5.4, on an RPi3 B+

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-raspi in Ubuntu.
https://bugs.launchpad.net/bugs/1975688
