Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-12 Thread Dale
Frank Steinmetzger wrote:
> On Thu, Oct 12, 2023 at 10:44:39PM +0100, Michael wrote:
>
>> Why don't you test throughput without encryption to confirm your assumption?
> What does `cryptsetup benchmark` say? I used to use a Celeron G1840 in my 
> NAS, which is Intel Haswell without AES_NI. It was able to do ~ 150 MB/s raw 
> encryption throughput when transferring to or from a LUKS’ed image in a 
> ramdisk, so almost 150 % of gigabit ethernet speed.

When I first set up the old 770T system, I did that.  It was faster with
no encryption on the 770T end but I did have encryption on my main rig's
end.  The difference was a pretty good bit.  Pretty much all my stuff is
encrypted.  Anyway, I was still using the old mount options and it was
still faster. 

I've never used that benchmark.  Didn't know it existed.  These are the
results.  Keep in mind, fireball is my main rig.  The FX-8350 thingy.
The NAS is currently the old 770T system.  Sometimes it is an old Dell
Inspiron but not this time.  ;-)



root@fireball / # cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1   878204 iterations per second for 256-bit key
PBKDF2-sha256 911805 iterations per second for 256-bit key
PBKDF2-sha512 698119 iterations per second for 256-bit key
PBKDF2-ripemd160  548418 iterations per second for 256-bit key
PBKDF2-whirlpool  299251 iterations per second for 256-bit key
argon2i   4 iterations, 1048576 memory, 4 parallel threads (CPUs)
for 256-bit key (requested 2000 ms time)
argon2id  4 iterations, 1048576 memory, 4 parallel threads (CPUs)
for 256-bit key (requested 2000 ms time)
# Algorithm |   Key |  Encryption |  Decryption
    aes-cbc    128b    63.8 MiB/s    51.4 MiB/s
    serpent-cbc    128b    90.9 MiB/s   307.6 MiB/s
    twofish-cbc    128b   200.4 MiB/s   218.4 MiB/s
    aes-cbc    256b    54.6 MiB/s    37.5 MiB/s
    serpent-cbc    256b    90.4 MiB/s   302.6 MiB/s
    twofish-cbc    256b   198.2 MiB/s   216.7 MiB/s
    aes-xts    256b    68.0 MiB/s    45.0 MiB/s
    serpent-xts    256b   231.9 MiB/s   227.6 MiB/s
    twofish-xts    256b   191.8 MiB/s   163.1 MiB/s
    aes-xts    512b    42.4 MiB/s    18.9 MiB/s
    serpent-xts    512b   100.9 MiB/s   124.6 MiB/s
    twofish-xts    512b   154.8 MiB/s   173.3 MiB/s
root@fireball / #



root@nas:~# cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1   741567 iterations per second for 256-bit key
PBKDF2-sha256 910222 iterations per second for 256-bit key
PBKDF2-sha512 781353 iterations per second for 256-bit key
PBKDF2-ripemd160  547845 iterations per second for 256-bit key
PBKDF2-whirlpool  350929 iterations per second for 256-bit key
argon2i   4 iterations, 571787 memory, 4 parallel threads (CPUs) for
256-bit key (requested 2000 ms time)
argon2id  4 iterations, 524288 memory, 4 parallel threads (CPUs) for
256-bit key (requested 2000 ms time)
# Algorithm |   Key |  Encryption |  Decryption
    aes-cbc    128b   130.6 MiB/s   128.0 MiB/s
    serpent-cbc    128b    64.7 MiB/s   161.8 MiB/s
    twofish-cbc    128b   175.4 MiB/s   218.8 MiB/s
    aes-cbc    256b   120.1 MiB/s   122.2 MiB/s
    serpent-cbc    256b    84.5 MiB/s   210.8 MiB/s
    twofish-cbc    256b   189.5 MiB/s   218.6 MiB/s
    aes-xts    256b   167.0 MiB/s   162.1 MiB/s
    serpent-xts    256b   173.9 MiB/s   204.5 MiB/s
    twofish-xts    256b   204.4 MiB/s   213.2 MiB/s
    aes-xts    512b   127.9 MiB/s   122.9 MiB/s
    serpent-xts    512b   201.5 MiB/s   204.7 MiB/s
    twofish-xts    512b   215.0 MiB/s   213.0 MiB/s
root@nas:~#



Is that about what you would expect?  Fireball is on a 970 mobo.  It's
slightly newer.  I think the 770T is about 2 years older, maybe 3. 


>>>> If you're copying over the network, that will be the limiting factor.
>>> Someone posted some extra options to mount with and add to exports
>>> file.
> Ah right, you use NFS. If not, I’d have suggested not to use rsync over ssh, 
> because that would indeed introduce a lot of encryption overhead.
>

I thought NFS was the proper way.  I use ssh and I use rsync, but
separately.  Didn't know they could be used together tho. 


>>> I still think encryption is slowing it down some.  As you say tho,
>>> ethernet isn't helping which is why I may look into other options later,
>>> faster ethernet or fiber if I can find something cheap enough. 
>> There are a lot of hypotheses in your statements, but not much testing to 
>> prove or disprove any of them.
>>
>> Why don't you try to isolate the cause by testing one system element at a 
>> time and see what results you get.
>> […]
>> 

Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-12 Thread Frank Steinmetzger
On Thu, Oct 12, 2023 at 10:44:39PM +0100, Michael wrote:

> > >>>> It only does this when I'm copying files over.  Right now I'm copying
> > >>>> about 26TBs of data over ethernet and it is taking a while.  Once I
> > >>>> stop it or it finishes the copy, the CPU goes to about nothing,
> > >>>> unless I'm doing something else.  So it has something to do with the
> > >>>> copy process.
> > >>> 
> > >>> Or the network. What are you using to copy? If you use rsync, you can
> > >>> make use of the --bwlimit option to reduce the speed and network
> > >>> load.
> > >> 
> > >> Reduce?  I wouldn't complain if it went faster.  I think it is about as
> > >> fast as it is going to get tho.
> > > 
> > > And that may be contributing to the CPU usage. Slowing down the flow may
> > > make the computer more usable, and you're never going to copy 26TB
> > > quickly, especially over ethernet.
> > > 
> > >> While I'm not sure what is keeping me from copying as fast as the drives
> > >> themselves can go, I suspect it is the encryption.
> 
> Why don't you test throughput without encryption to confirm your assumption?

What does `cryptsetup benchmark` say? I used to use a Celeron G1840 in my 
NAS, which is Intel Haswell without AES_NI. It was able to do ~ 150 MB/s raw 
encryption throughput when transferring to or from a LUKS’ed image in a 
ramdisk, so almost 150 % of gigabit ethernet speed.
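
A rough sketch of such a test (all sizes, names and paths here are
placeholders, not the exact commands from back then):

  truncate -s 4G /dev/shm/test.img           # image lives in tmpfs (RAM)
  cryptsetup luksFormat /dev/shm/test.img    # cryptsetup attaches a loop device itself
  cryptsetup open /dev/shm/test.img cryptotest
  mkfs.ext4 /dev/mapper/cryptotest
  mount /dev/mapper/cryptotest /mnt/test
  dd if=/dev/zero of=/mnt/test/zeros bs=1M count=2048 conv=fdatasync
  # dd prints the effective (encrypted) write throughput when it finishes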

> > > If you're copying over the network, that will be the limiting factor.
> > 
> > Someone posted some extra options to mount with and add to exports
> > file.

Ah right, you use NFS. If not, I’d have suggested not to use rsync over ssh, 
because that would indeed introduce a lot of encryption overhead.
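
For illustration, with made-up host and paths, the difference is just the
destination syntax:

  rsync -av /data/ dale@nas:/backup/data/   # remote-shell transport, i.e. ssh: encrypted stream
  rsync -av /data/ /mnt/nas/backup/data/    # writing to an NFS mount: no ssh layer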

> > I still think encryption is slowing it down some.  As you say tho,
> > ethernet isn't helping which is why I may look into other options later,
> > faster ethernet or fiber if I can find something cheap enough. 
> 
> There are a lot of hypotheses in your statements, but not much testing to 
> prove or disprove any of them.
> 
> Why don't you try to isolate the cause by testing one system element at a 
> time and see what results you get.
> […]
> Unless you're running a Pentium 4 or some other old CPU, it is almost certain 
> your CPU is capable of using AES-NI to offload to hardware some/all of the 
> encryption/decryption load - as long as you have the crypto module built into 
> your kernel.

The FX-8350 may be old, but it actually does have AES instructions.
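
A quick check for the flag (prints "aes" if the CPU advertises the
instructions):

  grep -m1 flags /proc/cpuinfo | grep -wo aes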

Here is my Haswell i5 (only two years younger than the FX) with AES_NI:

~ LC_ALL=C cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1      1323959 iterations per second for 256-bit key
PBKDF2-sha256    1724631 iterations per second for 256-bit key
PBKDF2-sha512    1137284 iterations per second for 256-bit key
PBKDF2-ripemd160  706587 iterations per second for 256-bit key
PBKDF2-whirlpool  510007 iterations per second for 256-bit key
argon2i       7 iterations, 1048576 memory, 4 parallel threads (CPUs) for 
256-bit key (requested 2000 ms time)
argon2id      7 iterations, 1048576 memory, 4 parallel threads (CPUs) for 
256-bit key (requested 2000 ms time)
# Algorithm |   Key |  Encryption |  Decryption
aes-cbc        128b    679.8 MiB/s   2787.0 MiB/s
serpent-cbc    128b     91.4 MiB/s    582.1 MiB/s
twofish-cbc    128b    194.9 MiB/s    368.3 MiB/s
aes-cbc        256b    502.3 MiB/s   2155.4 MiB/s
serpent-cbc    256b     90.3 MiB/s    582.5 MiB/s
twofish-cbc    256b    194.0 MiB/s    368.6 MiB/s
aes-xts        256b   2470.8 MiB/s   2478.7 MiB/s
serpent-xts    256b    537.4 MiB/s    526.1 MiB/s
twofish-xts    256b    347.3 MiB/s    347.3 MiB/s
aes-xts        512b   1932.6 MiB/s   1958.0 MiB/s
serpent-xts    512b    532.9 MiB/s    522.9 MiB/s
twofish-xts    512b    348.4 MiB/s    348.9 MiB/s

The 6-watt processor in my Surface Go yields:
aes-xts        512b   1122.2 MiB/s   1123.7 MiB/s

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The severity of the itch is inversely proportional to the reach.




Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-12 Thread Michael
On Thursday, 12 October 2023 21:50:28 BST Dale wrote:
> Neil Bothwick wrote:
> > On Thu, 12 Oct 2023 07:36:17 -0500, Dale wrote:
> >> Neil Bothwick wrote:
> >>> On Wed, 11 Oct 2023 17:52:58 -0500, Dale wrote:
> > >>>> It only does this when I'm copying files over.  Right now I'm copying
> > >>>> about 26TBs of data over ethernet and it is taking a while.  Once I
> > >>>> stop it or it finishes the copy, the CPU goes to about nothing,
> > >>>> unless I'm doing something else.  So it has something to do with the
> > >>>> copy process.
> >>> 
> > >>> Or the network. What are you using to copy? If you use rsync, you can
> > >>> make use of the --bwlimit option to reduce the speed and network
> > >>> load.
> >> 
> >> Reduce?  I wouldn't complain if it went faster.  I think it is about as
> >> fast as it is going to get tho.
> > 
> > > And that may be contributing to the CPU usage. Slowing down the flow may
> > > make the computer more usable, and you're never going to copy 26TB
> > > quickly, especially over ethernet.
> > 
> >> While I'm not sure what is keeping me from copying as fast as the drives
> >> themselves can go, I suspect it is the encryption.

Why don't you test throughput without encryption to confirm your assumption?


> > If you're copying over the network, that will be the limiting factor.
> 
> Someone posted some extra options to mount with and add to exports
> file.  Those added options almost doubled the speed.  I watch gkrellm
> and I think it is going about as fast as it can.  My problem is, some
> software uses one unit to measure things while another uses something
> else.  It makes it hard to figure out what is doing what.  Still, using
> gkrellm which is something I'm used to watching when it comes to drive
> read/write data, I think it is as good as it is going to get.  Not that
> I'm not open to trying other options that might speed things up.  I
> still think encryption is slowing it down some.  As you say tho,
> ethernet isn't helping which is why I may look into other options later,
> faster ethernet or fiber if I can find something cheap enough. 

There are a lot of hypotheses in your statements, but not much testing to 
prove or disprove any of them.

Why don't you try to isolate the cause by testing one system element at a time 
and see what results you get.

Copy a large file from tmpfs to tmpfs to see how fast it can transfer across 
your LAN - or use iperf3 as already recommended.  Use a file size large enough 
to saturate your network and give you a real-life max throughput.
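
For instance (address and duration are examples):

  # on the NAS:
  iperf3 -s
  # on the desktop, pointed at the NAS:
  iperf3 -c 192.168.1.10 -t 30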

Repeat, but this time copy the large file over to disk.

Repeat, but this time try different filesystems, disks, volumes, strides/
stripes, add encryption, compression, whatnot.
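
A sketch of the disk-copy step, with placeholder paths and sizes:

  # create a test file in RAM first, so the source can't be the bottleneck:
  dd if=/dev/urandom of=/dev/shm/test.bin bs=1M count=4096
  # then time the write to the filesystem under test:
  dd if=/dev/shm/test.bin of=/mnt/target/test.bin bs=1M conv=fdatasync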

You may spend an hour or two, but you'd soon isolate the major contributing 
factor(s) causing the observed slowdown.

Unless you're running a Pentium 4 or some other old CPU, it is almost certain 
your CPU is capable of using AES-NI to offload to hardware some/all of the 
encryption/decryption load - as long as you have the crypto module built into 
your kernel.
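
Two quick ways to check (the second needs CONFIG_IKCONFIG_PROC):

  lsmod | grep -i aes               # e.g. aesni_intel on AES-NI capable CPUs
  zgrep CRYPTO_AES /proc/config.gz  # kernel config, if exposed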

PS. Keep notes and flush caches between tests to avoid drawing conclusions from 
spurious results.
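
That is, between runs (as root):

  sync
  echo 3 > /proc/sys/vm/drop_caches   # drops page cache, dentries and inodes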



Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-12 Thread Dale
Neil Bothwick wrote:
> On Thu, 12 Oct 2023 07:36:17 -0500, Dale wrote:
>
>> Neil Bothwick wrote:
>>> On Wed, 11 Oct 2023 17:52:58 -0500, Dale wrote:
>>>  
>>>> It only does this when I'm copying files over.  Right now I'm copying
>>>> about 26TBs of data over ethernet and it is taking a while.  Once I
>>>> stop it or it finishes the copy, the CPU goes to about nothing,
>>>> unless I'm doing something else.  So it has something to do with the
>>>> copy process.
>>> Or the network. What are you using to copy? If you use rsync, you can
>>> make use of the --bwlimit option to reduce the speed and network
>>> load.
>>>
>>>  
>>
>> Reduce?  I wouldn't complain if it went faster.  I think it is about as
>> fast as it is going to get tho.
> And that may be contributing to the CPU usage. Slowing down the flow may
> make the computer more usable, and you're never going to copy 26TB
> quickly, especially over ethernet.
>
>> While I'm not sure what is keeping me from copying as fast as the drives
>> themselves can go, I suspect it is the encryption.
> If you're copying over the network, that will be the limiting factor.
>
>


Someone posted some extra options to mount with and add to exports
file.  Those added options almost doubled the speed.  I watch gkrellm
and I think it is going about as fast as it can.  My problem is, some
software uses one unit to measure things while another uses something
else.  It makes it hard to figure out what is doing what.  Still, using
gkrellm which is something I'm used to watching when it comes to drive
read/write data, I think it is as good as it is going to get.  Not that
I'm not open to trying other options that might speed things up.  I
still think encryption is slowing it down some.  As you say tho,
ethernet isn't helping which is why I may look into other options later,
faster ethernet or fiber if I can find something cheap enough. 
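
The exact options aren't quoted in this message; the usual NFS tuning knobs
people suggest look something like this (values purely illustrative):

  # server side, /etc/exports:
  /export/data  192.168.1.0/24(rw,async,no_subtree_check)
  # client side:
  mount -t nfs -o rsize=1048576,wsize=1048576 nas:/export/data /mnt/data

Note that async trades crash safety for throughput, so it is a deliberate
choice.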

My new CPU cooler is on its way.  I'm picking up the smaller stuff at
times.  I couldn't pass up a good deal on that CPU cooler.  The big
spend is CPU, mobo and memory.  Oh, that MASSIVE case too.  O_O 

Dale

:-)  :-) 



Re: [gentoo-user] How to replay a backup system?

2023-10-12 Thread Helmut Jarausch

On 10/12/2023 07:09:35 PM, Neil Bothwick wrote:

On Thu, 12 Oct 2023 18:54:10 +0200, Helmut Jarausch wrote:

> from time to time - as was the case a few days ago - Gentoo updates
> lead to an unbootable system.
> I backup my system each day - using BTRFS snapshots.
>
> Now, only a few files on the current system have changed; the rest of the
> 400 GB root partition is unchanged.
> Therefore I only have to replace these newer files by the versions
> saved a day before.
>
> How can this be done efficiently? Unfortunately AFAIK rsync doesn't
> have an option to copy only files which are
> NEWER on the destination than the corresponding files in the backup.

You can replace the original root subvolume with the snapshot. One way of
doing it is detailed at
https://unix.stackexchange.com/questions/19211/how-to-create-a-snapshot-in-btrfs-and-then-rollback-to-it-after-some-work




Thanks Neil,

unfortunately my root FS is of type ext4; only my backup FS is of type BTRFS.
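
For an ext4 root, plain rsync from the snapshot may already be enough: by
default it transfers any file whose size or mtime differs, regardless of
which side is newer, so it would also roll back files that are newer on the
destination. A sketch with placeholder paths:

  # preview what would be rewritten:
  rsync -aAXHv --dry-run /mnt/backup/snapshot-2023-10-11/ / | less
  # then run without --dry-run; add --delete to also remove files
  # created since the snapshot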


Helmut




Re: [gentoo-user] How to replay a backup system?

2023-10-12 Thread Neil Bothwick
On Thu, 12 Oct 2023 18:54:10 +0200, Helmut Jarausch wrote:

> from time to time - as was the case a few days ago - Gentoo updates  
> lead to an unbootable system.
> I backup my system each day - using BTRFS snapshots.
> 
> Now, only a few files on the current system have changed; the rest of the  
> 400 GB root partition is unchanged.
> Therefore I only have to replace these newer files by the versions  
> saved a day before.
> 
> How can this be done efficiently? Unfortunately AFAIK rsync doesn't  
> have an option to copy only files which are
> NEWER on the destination than the corresponding files in the backup.

You can replace the original root subvolume with the snapshot. One way of
doing it is detailed at
https://unix.stackexchange.com/questions/19211/how-to-create-a-snapshot-in-btrfs-and-then-rollback-to-it-after-some-work
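
In short, with placeholder device and subvolume names:

  mount -o subvolid=5 /dev/sdXn /mnt     # top level of the btrfs volume
  mv /mnt/@root /mnt/@root.broken
  btrfs subvolume snapshot /mnt/@snapshots/2023-10-11 /mnt/@root
  # reboot into the restored root, then clean up:
  btrfs subvolume delete /mnt/@root.broken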


-- 
Neil Bothwick

Bang on the LEFT side of your computer to restart Windows




[gentoo-user] How to replay a backup system?

2023-10-12 Thread Helmut Jarausch

Hi,
from time to time - as was the case a few days ago - Gentoo updates  
lead to an unbootable system.

I backup my system each day - using BTRFS snapshots.

Now, only a few files on the current system have changed; the rest of the  
400 GB root partition is unchanged.
Therefore I only have to replace these newer files by the versions  
saved a day before.


How can this be done efficiently? Unfortunately AFAIK rsync doesn't
have an option to copy only files which are
NEWER on the destination than the corresponding files in the backup.

Many thanks for a hint,
Helmut



Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-12 Thread Neil Bothwick
On Thu, 12 Oct 2023 07:36:17 -0500, Dale wrote:

> Neil Bothwick wrote:
> > On Wed, 11 Oct 2023 17:52:58 -0500, Dale wrote:
> >  
> >> It only does this when I'm copying files over.  Right now I'm copying
> >> about 26TBs of data over ethernet and it is taking a while.  Once I
> >> stop it or it finishes the copy, the CPU goes to about nothing,
> >> unless I'm doing something else.  So it has something to do with the
> >> copy process.  
> > Or the network. What are you using to copy? If you use rsync, you can
> > make use of the --bwlimit option to reduce the speed and network
> > load.
> >
> >  
> 
> 
> Reduce?  I wouldn't complain if it went faster.  I think it is about as
> fast as it is going to get tho.

And that may be contributing to the CPU usage. Slowing down the flow may
make the computer more usable, and you're never going to copy 26TB
quickly, especially over ethernet.

> While I'm not sure what is keeping me from copying as fast as the drives
> themselves can go, I suspect it is the encryption.

If you're copying over the network, that will be the limiting factor.


-- 
Neil Bothwick

Nixon's Principal: If 2 wrongs don't make a right, try 3.




Re: [gentoo-user] Re: world updates blocked by Qt

2023-10-12 Thread Michael Cook

On 10/12/23 06:56, Alan McKinnon wrote:
On Thu, Oct 12, 2023 at 12:19 PM Nikos Chantziaras wrote:


On 11/10/2023 21:14, Alan McKinnon wrote:
> On Wed, Oct 11, 2023 at 4:49 PM Michael Cook wrote:
>     I just --backtrack=100 and walked away, seemed to have figured
>     something out for my system and updated normally.
>
> This is the one that solved it. Been away too long, forgot all
about
> backtrack

I've had this in my make.conf for many years now:

   EMERGE_DEFAULT_OPTS="--backtrack=200"

Never hit the issue you described (KDE desktop, thus Qt is always
a dep.)


Added similar here now. I see the default is 10, obviously that is not 
enough when a big Qt drop hits.


I have something like 30 Qt-5 packages! When did it get so big? I 
recall building Qt4 and it was about 6 or so.

Perhaps the devs split it up into many smaller packages.

Alan

--
Alan McKinnon
alan dot mckinnon at gmail dot com


I built this computer in 2018, been using KDE for my desktop and it's 
always been on ~amd64. I've had to increase backtrack like twice to 
resolve some Qt upgrades; not worth upping the default, it would not save time.


Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-12 Thread Dale
Neil Bothwick wrote:
> On Wed, 11 Oct 2023 17:52:58 -0500, Dale wrote:
>
>> It only does this when I'm copying files over.  Right now I'm copying
>> about 26TBs of data over ethernet and it is taking a while.  Once I stop
>> it or it finishes the copy, the CPU goes to about nothing, unless I'm
>> doing something else.  So it has something to do with the copy process.
> Or the network. What are you using to copy? If you use rsync, you can
> make use of the --bwlimit option to reduce the speed and network load.
>
>


Reduce?  I wouldn't complain if it went faster.  I think it is about as
fast as it is going to get tho.  I may one day try to get a fiber card
to go between my computer and some sort of NAS box or something.  I'm
slowly building up my number of rigs so I can separate some things
around.  I got my old Gigabyte 770T running but no case yet.  I use it
for my backups instead of the old Dell Inspiron.  Actually, I found a
good deal on a CPU cooler last night for the new build and bought it. 
About half price or so.  Reviews claim it is a really nice cooler. 
Fairly good size with two fans. 

While I'm not sure what is keeping me from copying as fast as the drives
themselves can go, I suspect it is the encryption.  I think for these
old CPUs, it uses more CPU time.  Htop with the meters shows the CPU
doing something but not exactly what.  I'm pretty sure it is either
encryption, LVM or both. 

Now it's time for my weekly pokings at the Doctor and my weekly grocery
shopping.  ;-)

Dale

:-)  :-) 



Re: [gentoo-user] Re: world updates blocked by Qt

2023-10-12 Thread Alan McKinnon
On Thu, Oct 12, 2023 at 12:19 PM Nikos Chantziaras wrote:

> On 11/10/2023 21:14, Alan McKinnon wrote:
> > On Wed, Oct 11, 2023 at 4:49 PM Michael Cook wrote:
> > I just --backtrack=100 and walked away, seemed to have figured
> > something out for my system and updated normally.
> >
> > This is the one that solved it. Been away too long, forgot all about
> > backtrack
>
> I've had this in my make.conf for many years now:
>
>EMERGE_DEFAULT_OPTS="--backtrack=200"
>
> Never hit the issue you described (KDE desktop, thus Qt is always a dep.)
>
>
Added similar here now. I see the default is 10, obviously that is not
enough when a big Qt drop hits.
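
For a one-off run the same knob works on the command line, e.g. a typical
world update:

  emerge --ask --update --deep --newuse --backtrack=100 @world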

I have something like 30 Qt-5 packages! When did it get so big? I recall
building Qt4 and it was about 6 or so.
Perhaps the devs split it up into many smaller packages.

Alan

-- 
Alan McKinnon
alan dot mckinnon at gmail dot com


[gentoo-user] Re: world updates blocked by Qt

2023-10-12 Thread Nikos Chantziaras

On 11/10/2023 21:14, Alan McKinnon wrote:
On Wed, Oct 11, 2023 at 4:49 PM Michael Cook wrote:

I just --backtrack=100 and walked away, seemed to have figured
something out for my system and updated normally.

This is the one that solved it. Been away too long, forgot all about 
backtrack


I've had this in my make.conf for many years now:

  EMERGE_DEFAULT_OPTS="--backtrack=200"

Never hit the issue you described (KDE desktop, thus Qt is always a dep.)




Re: [gentoo-user] Getting output of a program running in background after a crash

2023-10-12 Thread Neil Bothwick
On Wed, 11 Oct 2023 17:52:58 -0500, Dale wrote:

> It only does this when I'm copying files over.  Right now I'm copying
> about 26TBs of data over ethernet and it is taking a while.  Once I stop
> it or it finishes the copy, the CPU goes to about nothing, unless I'm
> doing something else.  So it has something to do with the copy process.

Or the network. What are you using to copy? If you use rsync, you can
make use of the --bwlimit option to reduce the speed and network load.
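
For example (the rate is illustrative; plain numbers mean KiB/s):

  rsync -av --bwlimit=51200 /source/ nas:/dest/   # cap at roughly 50 MiB/s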


-- 
Neil Bothwick

Is that "woof" feed me; "woof" walk me; "woof" there's a burglar? What??




Re: [gentoo-user] world updates blocked by Qt

2023-10-12 Thread Wols Lists

On 11/10/2023 17:44, Philip Webb wrote:

231011 Alan McKinnon wrote:

Today a sync and emerge world produces a huge list of blockers.
qt 5.15.10 is currently installed and qt 5.15.11 is new in the tree and
being blocked.
All the visible blockers are Qt itself so --verbose-conflicts is needed.


My experience for some time has been that Qt pkgs block one another,
so that the only way out is to unmerge them all, then remerge them all.
If anyone knows a better method, please let us know.

I haven't had that in a long time. If I get blocks like that (rare), 
--backtrack=100 (or whatever it is) usually unblocks it.


The other thing is, I don't have any explicit perl code on my system, 
but on at least one occasion running perl-cleaner --all unbunged a 
problem ...


There's a whole bunch of incantations that are rarely needed but need to 
be remembered for when they are ...


Cheers,
Wol