Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Dale
Frank Steinmetzger wrote:
> On Mon, Sep 18, 2023 at 06:40:52PM -0500, Dale wrote:
>
 I tend to need quite a few PCIe slots.  I like to have my own video
 card.  I never liked the built in ones.
>>> You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may 
>>> have asked that before).
>>>
>>> I get it when you wanna do it your way because it always worked™ (which is 
>>> not wrong — don’t misunderstand me) and perhaps you had some bad experience 
>>> in the past. OTOH it’s a pricey component usually only needed by gamers and 
>>> number crunchers. On-board graphics are just fine for Desktop and even 
>>> (very) light gaming and they lower power draw considerably. Give it a 
>>> swirl, 
>>> maybe you like it. :) Both Intel and AMD work just fine with the kernel 
>>> drivers.
>> Well, for one, I usually upgrade the video card several times before I
>> upgrade the mobo.  When it is built in, not an option.  I think I'm on my
>> third in this rig.
>>
>> I also need multiple outputs, two at least.
> That is not a problem with iGPUs. The only thing to consider is the type of 
> video connectors on the board. Most have two classical ones, some three, 
> divided among HDMI and DP. And the fancy ones use USB-C with DisplayPort 
> alternative mode. Also, dGPUs draw a lot more when using two displays.
>

They have added a lot of stuff to mobos since I bought one about a
decade ago.  Maybe things have improved.  I just like PCIe slots and
cards.  Gives me more options.  Given how things have changed tho, I may
have to give in on some things.  I just like my mobos to be like Linux. 
Have something do one thing and do it well.  When needed, change that
thing.  ;-) 


>> One for
>> monitor and one for TV.  My little NAS box I'm currently using is a Dell
>> something.  The video works but it has no GUI.  At times during the boot
>> up process, things don't scroll up the screen.  I may be missing a
>> setting somewhere but when it blanks out, it comes back with a different
>> resolution and font size.
> In case you use Grub, it has an option to keep the UEFI video mode.
> So there would be no switching if UEFI already starts with the proper 
> resolution.

That rig is old.  Maybe 10 or 15 years old.  No UEFI on it.  It does use
grub tho.  I duckduckgo'd it and changed some settings, but last time I
booted, it did all that blinky, blank stuff.  Sometimes, I wonder if it
is hung up or crashed.  Then it pops up again and lets me know it is
still booting.  Eventually, I'll remove the monitor completely.  Then it
either boots up or it doesn't.  I just ssh in, decrypt the drives, then
mount from my main rig and start my backups.  I might add, with this new
LVM setup, the backups started at about the end of a previous thread,
last Wednesday I think.  It's still copying data to the new backup.
It's up to the files starting with "M".  The ones starting with
"The" are pretty big.  It's gonna take a while.  Poor drives.  o_O
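That ssh-in-and-unlock routine could look roughly like this. Every name below (`nas`, `/dev/sdb`, `backupvg`, the paths) is hypothetical, just a sketch of the general shape, not the actual setup:

```shell
# Unlock the LUKS container on the headless box, activate its LVM volumes,
# mount, sync, then tear it all back down.  All names are made up.
ssh nas cryptsetup luksOpen /dev/sdb backup_crypt   # prompts for the passphrase
ssh nas vgchange -ay backupvg
ssh nas mount /dev/backupvg/data /mnt/backup
rsync -aH --info=progress2 /home/dale/media/ nas:/mnt/backup/media/
ssh nas umount /mnt/backup
ssh nas vgchange -an backupvg
ssh nas cryptsetup luksClose backup_crypt
```

With that in a script, the box never needs a monitor at all; it either answers ssh or it doesn't.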


>> My Gentoo box doesn't do that.  I can see the screen from BIOS all the
>> way to when it finishes booting and the GUI comes up.  I'm one of those
>> who watches.  ;-)
> Yeah, and it’s neat if there is no flickering or blanking. So modern and 
> clean.
>
 Figure the case is a
 good place to start.  Mobo, CPU and such next.  Figure mobo will pick
 memory for me since usually only one or two will work anyway. 
>>> One or two what?
>> One or two types of memory.  Usually, plain or ECC.  Mobos are
>> usually pretty picky about their memory. 
> Hm… while I haven’t used that many different components in my life, so far 
> I have not had a system not accept any RAM. Just stick to the big names, I 
> guess.

I think one of my rigs uses plain DDR, and I think my main rig is DDR3.
I noticed they are up to DDR5 now.  What I meant was, if a mobo requires
DDR4, that is usually all it will take.  Nothing else will work.  Whatever
the mobo requires is what you use; just pick a good brand, as you say. 


 Since no one mentioned a better case, that Define thing may end up being
 it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
 I bought my current case, which has space for five 3.5" and six 5 1/4"
 drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
 ones have been full for a while and the 5 1/4" are about full too.
>>> Full with ODDs? Or drive cages? You can get 3×3.5″ cages which install into 
> ^
>
> That should have been 5×3.5″. Too many threes and fives floatin’ around in 
> my head and it’s getting late.
>

Honestly, I read it the way you meant it.  lol  I've got about three
different kinds in my wish list.  Eventually, I'll take the side off and
see which one will work.  I also found one that I think can be used as an
external case.  It has a fan, power plug and eSATA connectors.  I think
it holds five drives.  If I get that, I just may scrap the setup I
currently have and have one l

[gentoo-user] Password questions, looking for opinions. cryptsetup question too.

2023-09-18 Thread Dale
Howdy,

As some know, I encrypt a lot of stuff here.  I use passwords that I can
recall but no one could ever guess.  I don't use things that someone may
figure out like pet's name or anything like that.  I use a couple sites
to see just how good my passwords are.  I try to get into the millions
of years at least.  I have a couple that it claims are in the trillions
of years to crack.  I've read some things not to use like pet names and
such.  I've also read that one should use upper and lower case letters,
symbols and such and I do that, especially on my stuff I never want to
be cracked.  Some stuff, when I'm dead, it's gone.

In the real world tho, how do people reading this make passwords that no
one could ever guess?  I use Bitwarden to handle website passwords and
it does a good job.  I make up my own tho when encrypting drives.  I'm
not sure I can really use Bitwarden for that given it is a command line
thing, well, in a script in my case.  I doubt anyone would ever guess
any of my passwords but how do people reading this do theirs?  Just how
far do you really go to make it secure?  Obviously you shouldn't give up
much detail but just some general ideas.  Maybe even an example or two of
a fake password, just something that you would come up with and how. 
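One hedged recipe (not anything from this thread, just a sketch): let the kernel's CSPRNG mint the password and keep it in a password manager or on paper.

```shell
# 20 random characters from a 69-symbol alphabet gives roughly
# 20 * log2(69) ~ 122 bits of entropy -- far past "millions of years".
pw=$(tr -dc 'A-Za-z0-9_!@#%+=' < /dev/urandom | head -c 20)
echo "$pw"
```

Something like that is unguessable by construction; the hard part is storage and recall, which is where a manager like Bitwarden comes in.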

These are the two sites I use. 


https://www.passwordmonster.com/

https://www.security.org/how-secure-is-my-password/


I have a password in the first one that shows this:


It would take a computer about 63 thousand years to crack your password


Second one says this.

It would take a computer about 5 million years to crack your password

Exact same password in both.  Why such a large range to crack?  I tend
to use the first site to create a password.  Then I test it in the
second site to sort of confirm it.  If both say a long time, then I've got
a fairly good one, depending on what I'm protecting.  Still, why such a
difference?  One reason I use the first site is that I can make it show the
password.  The second site doesn't do that, so editing it to improve
things is harder since you can't see it.  The first site makes that easy
and gives me an idea of whether I'm on the right track.  The second site
confirms it.  I did contact the second site and ask for a button to show
the password.  After all, no one is here but me.  My windows are covered. 

Also, I use cryptsetup luksFormat -s 512 ... to encrypt things.  Is
that 512 a good number?  Can it be something different?  I'd think since
it is offered as an option, it can have different values and encrypt
stronger or weaker.  Is that the case?  I've tried to find out but it
seems everyone uses 512.  If that is the only value, why make it an
option?  I figure it can have other values but how does that work? 
Heck, a link to some good info on that would be good.  :-)
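As far as I know (worth checking against the cryptsetup man page), -s / --key-size is the cipher key length in bits, and for the default aes-xts-plain64 mode only 256 and 512 are accepted: XTS splits the key into two halves, so -s 512 actually means AES-256. A sketch (/dev/sdX is a placeholder, and luksFormat is destructive):

```shell
cryptsetup benchmark                 # compare aes-xts 256b vs 512b speed on your CPU
cryptsetup luksFormat -c aes-xts-plain64 -s 256 /dev/sdX   # AES-128 in XTS mode
cryptsetup luksFormat -c aes-xts-plain64 -s 512 /dev/sdX   # AES-256 in XTS mode
```

So 512 isn't the only value, just the strongest one for that cipher, which is why everyone uses it.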

Thoughts?  Opinions?  Suggestions? 

Dale

:-)  :-) 



Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
On Mon, Sep 18, 2023 at 06:40:52PM -0500, Dale wrote:

> >> I tend to need quite a few PCIe slots.  I like to have my own video
> >> card.  I never liked the built in ones.
> > You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may 
> > have asked that before).
> >
> > I get it when you wanna do it your way because it always worked™ (which is 
> > not wrong — don’t misunderstand me) and perhaps you had some bad experience 
> > in the past. OTOH it’s a pricey component usually only needed by gamers and 
> > number crunchers. On-board graphics are just fine for Desktop and even 
> > (very) light gaming and they lower power draw considerably. Give it a 
> > swirl, 
> > maybe you like it. :) Both Intel and AMD work just fine with the kernel 
> > drivers.
> 
> Well, for one, I usually upgrade the video card several times before I
> upgrade the mobo.  When it is built in, not an option.  I think I'm on my
> third in this rig.
>
> I also need multiple outputs, two at least.

That is not a problem with iGPUs. The only thing to consider is the type of 
video connectors on the board. Most have two classical ones, some three, 
divided among HDMI and DP. And the fancy ones use USB-C with DisplayPort 
alternative mode. Also, dGPUs draw a lot more when using two displays.

> One for
> monitor and one for TV.  My little NAS box I'm currently using is a Dell
> something.  The video works but it has no GUI.  At times during the boot
> up process, things don't scroll up the screen.  I may be missing a
> setting somewhere but when it blanks out, it comes back with a different
> resolution and font size.

In case you use Grub, it has an option to keep the UEFI video mode.
So there would be no switching if UEFI already starts with the proper 
resolution.
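I believe the option Frank means is GRUB_GFXPAYLOAD_LINUX. A minimal sketch for /etc/default/grub (regenerate grub.cfg afterwards with grub-mkconfig):

```shell
# /etc/default/grub
GRUB_GFXMODE=auto              # mode GRUB itself uses for the menu
GRUB_GFXPAYLOAD_LINUX=keep     # hand that same mode to the kernel: no re-modeset
```

With "keep", the kernel inherits whatever mode the firmware/GRUB already set, which is what avoids the blank-and-switch during boot.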

> My Gentoo box doesn't do that.  I can see the screen from BIOS all the
> way to when it finishes booting and the GUI comes up.  I'm one of those
> who watches.  ;-)

Yeah, and it’s neat if there is no flickering or blanking. So modern and 
clean.

> >> Figure the case is a
> >> good place to start.  Mobo, CPU and such next.  Figure mobo will pick
> >> memory for me since usually only one or two will work anyway. 
> > One or two what?
> 
> One or two types of memory.  Usually, plain or ECC.  Mobos are
> usually pretty picky about their memory. 

Hm… while I haven’t used that many different components in my life, so far 
I have not had a system not accept any RAM. Just stick to the big names, I 
guess.

> >> Since no one mentioned a better case, that Define thing may end up being
> >> it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
> >> I bought my current case, which has space for five 3.5" and six 5 1/4"
> >> drives, I thought I'd never fill up just the 3.5" ones.  Now, the 3.5"
> >> ones have been full for a while and the 5 1/4" are about full too.
> > Full with ODDs? Or drive cages? You can get 3×3.5″ cages which install into 
^

That should have been 5×3.5″. Too many threes and fives floatin’ around in 
my head and it’s getting late.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The majority of people have an above-average number of legs.


signature.asc
Description: PGP signature


Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Dale
Frank Steinmetzger wrote:
> On Mon, Sep 18, 2023 at 02:20:56PM -0500, Dale wrote:
>
 […]
 The downside, only micro ATX and
 mini ITX mobo.  This is a serious down vote here.
>>> Why is that bad? µATX comes with up to four PCIe slots. Even for ten 
>>> drives, 
>>> you only need one SATA expander (with four or six on-board). Perhaps a fast 
>>> network card if one is needed, that makes two slots. You don’t get more RAM 
>>> slots with ATX, either. And, if not anything else, a smaller board means 
>>> (or can mean) lower power consumption and thus less heat.
>>>
>>> Speaking of RAM; might I interest you in server-grade hardware? The reason 
>>> being that you can then use ECC memory, which is a nice perk for storage.¹ 
>>> Also, the chance is higher to get sufficient SATA connectors on-board 
>>> (maybe 
>>> in the form of an SFF connector, which is actually good, since it means 
>>> reduced “cable salad”).
>>> AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade 
>>> board, 
>>> because they too support ECC. And DDR5 has basic (meaning 1 bit and 
>>> transparent to the OS) ECC built-in from the start.
>> I tend to need quite a few PCIe slots.  I like to have my own video
>> card.  I never liked the built in ones.
> You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may 
> have asked that before).
>
> I get it when you wanna do it your way because it always worked™ (which is 
> not wrong — don’t misunderstand me) and perhaps you had some bad experience 
> in the past. OTOH it’s a pricey component usually only needed by gamers and 
> number crunchers. On-board graphics are just fine for Desktop and even 
> (very) light gaming and they lower power draw considerably. Give it a swirl, 
> maybe you like it. :) Both Intel and AMD work just fine with the kernel 
> drivers.

Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not an option.  I think I'm on my
third in this rig.  I also need multiple outputs, two at least.  One for
monitor and one for TV.  My little NAS box I'm currently using is a Dell
something.  The video works but it has no GUI.  At times during the boot
up process, things don't scroll up the screen.  I may be missing a
setting somewhere but when it blanks out, it comes back with a different
resolution and font size.  I figure it is blanking during the switch. 
My Gentoo box doesn't do that.  I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up.  I'm one of those
who watches.  ;-)


>> I also have never had a good built in network port to work right either.  
>> Every one of them always had problems if they worked at all.
> I faintly remember a thread about that from long ago. But the same thought 
> applies: in case you buy a new board, give it a try. Keep away from Intel 
> I225-V though, that 2.5 GbE chip has a design flaw but manufacturers still 
> use it.
>
>> I also need PCIe slots for SATA expander cards.
> That’s the use case I mostly thought of. Irritatingly, I just looked at my 
> price comparison site for SATA expansion cards and all 8×SATA cards are PCIe 
> 2.0 with either two or even just one lane. -_- So not even PCIe 3.0×1, which 
> is the same speed as 2.0×2 but would fit in a ×1 slot which many boards 
> have in abundance.
>
> 2.0×2 is about 1 GB/s. Divided by 8 drives gives you 125 MB/s/drive.

There's always going to be a bottleneck somewhere; I just try to
minimize it if I can.  Plus, with two cards, if one fails, at least I have a
second to play with.  I might be able to get one VG at a time up and running.

>> If I use
>> the Define case, I'd like to spread that across at least two cards,
>> maybe three.  So, network, video and at least a couple SATA cards,
>> adding up fast.  Sometimes, I wouldn't mind having the larger ATX with
>> extra PCIe slots.  Thought about having SAS cards and cables that
>> convert to SATA.  I think they do that.  That may make it just one
>> card.  I dunno.  I haven't dug deep into that yet.
> After the disappointment with the SATA expanders I looked at SAS cards.
> They are well connected on the PCIe side (2.0×8 or 3.0×8) and they are 
> compatible with SATA drives. I found an Intel SAS card with four SFF 
> connectors (meaning 16 drives!) for a little over 100 €. It’s called 
> RMSP3JD160J. I don’t know why it is so cheap, though. Because the 
> second-cheapest competitor is already at 190 €.

I did a quick search; only one turned up, and it is listed as for parts or
not working.  Still, it could lead me to more options.  It would likely be
a good idea to use SAS.  Plus, if I start buying SAS drives, I'm ready. 
I sometimes find a good deal on a SAS drive. 


>> Figure the case is a
>> good place to start.  Mobo, CPU and such next.  Figure mobo will pick
>> memory for me since usually only one or two will work anyway. 
> One or two what?

One or two types of memory.  Usually, plain or ECC.  Mobos are
usually pretty

Re: [gentoo-user] Controlling emerges

2023-09-18 Thread William Kenworthy

per package env variables?

https://wiki.gentoo.org/wiki//etc/portage/package.env
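For example (the file name and package here are arbitrary), to build one heavy package with fewer jobs:

```shell
# /etc/portage/env/no-parallel.conf
MAKEOPTS="-j1"

# /etc/portage/package.env -- apply that environment to one package
dev-qt/qtwebengine no-parallel.conf
```

Any make.conf variable can be overridden per package this way.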

BillK


Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
On Mon, Sep 18, 2023 at 02:20:56PM -0500, Dale wrote:

> >> […]
> >> The downside, only micro ATX and
> >> mini ITX mobo.  This is a serious down vote here.
> > Why is that bad? µATX comes with up to four PCIe slots. Even for ten 
> > drives, 
> > you only need one SATA expander (with four or six on-board). Perhaps a fast 
> > network card if one is needed, that makes two slots. You don’t get more RAM 
> > slots with ATX, either. And, if not anything else, a smaller board means 
> > (or can mean) lower power consumption and thus less heat.
> >
> > Speaking of RAM; might I interest you in server-grade hardware? The reason 
> > being that you can then use ECC memory, which is a nice perk for storage.¹ 
> > Also, the chance is higher to get sufficient SATA connectors on-board 
> > (maybe 
> > in the form of an SFF connector, which is actually good, since it means 
> > reduced “cable salad”).
> > AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade 
> > board, 
> > because they too support ECC. And DDR5 has basic (meaning 1 bit and 
> > transparent to the OS) ECC built-in from the start.
> 
> I tend to need quite a few PCIe slots.  I like to have my own video
> card.  I never liked the built in ones.

You’re just asking to be asked. ;-) Why don’t you like them? (I fear I may 
have asked that before).

I get it when you wanna do it your way because it always worked™ (which is 
not wrong — don’t misunderstand me) and perhaps you had some bad experience 
in the past. OTOH it’s a pricey component usually only needed by gamers and 
number crunchers. On-board graphics are just fine for Desktop and even 
(very) light gaming and they lower power draw considerably. Give it a swirl, 
maybe you like it. :) Both Intel and AMD work just fine with the kernel 
drivers.

> I also have never had a good built in network port to work right either.  
> Every one of them always had problems if they worked at all.

I faintly remember a thread about that from long ago. But the same thought 
applies: in case you buy a new board, give it a try. Keep away from Intel 
I225-V though, that 2.5 GbE chip has a design flaw but manufacturers still 
use it.

> I also need PCIe slots for SATA expander cards.

That’s the use case I mostly thought of. Irritatingly, I just looked at my 
price comparison site for SATA expansion cards and all 8×SATA cards are PCIe 
2.0 with either two or even just one lane. -_- So not even PCIe 3.0×1, which 
is the same speed as 2.0×2 but would fit in a ×1 slot which many boards 
have in abundance.

2.0×2 is about 1 GB/s. Divided by 8 drives gives you 125 MB/s/drive.
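Frank's numbers, spelled out (a PCIe 2.0 lane moves roughly 500 MB/s of payload after 8b/10b encoding overhead):

```shell
lane=500                       # MB/s per PCIe 2.0 lane, approximate
lanes=2                        # a 2.0 x2 card
drives=8                       # 8-port SATA expansion card
total=$((lane * lanes))
per_drive=$((total / drives))
echo "${total} MB/s total, ${per_drive} MB/s per drive"
```

125 MB/s is below the sequential speed of most modern 3.5″ drives, so an 8-port card on two 2.0 lanes really can become the bottleneck during scrubs or rebuilds.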

> If I use
> the Define case, I'd like to spread that across at least two cards,
> maybe three.  So, network, video and at least a couple SATA cards,
> adding up fast.  Sometimes, I wouldn't mind having the larger ATX with
> extra PCIe slots.  Thought about having SAS cards and cables that
> convert to SATA.  I think they do that.  That may make it just one
> card.  I dunno.  I haven't dug deep into that yet.

After the disappointment with the SATA expanders I looked at SAS cards.
They are well connected on the PCIe side (2.0×8 or 3.0×8) and they are 
compatible with SATA drives. I found an Intel SAS card with four SFF 
connectors (meaning 16 drives!) for a little over 100 €. It’s called 
RMSP3JD160J. I don’t know why it is so cheap, though. Because the 
second-cheapest competitor is already at 190 €.

> Figure the case is a
> good place to start.  Mobo, CPU and such next.  Figure mobo will pick
> memory for me since usually only one or two will work anyway. 

One or two what?

> > I was going to upgrade my 9 years old Haswell system at some point to a new 
> > Ryzen build. Have been looking around for parts and configs for perhaps two 
> > years now but I can’t decide (perhaps some remember previous ramblings 
> > about 
> > that). Now I actually consider buying a tiny Deskmini X300 after I found out 
> > that it does support ACPI S3, but only with a specific UEFI version. No 
> > 10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)
> 
> I thought about using a Raspberry Pi for a NAS box.  Just build more
> than one of them.  Thing is, finding the parts for it is almost
> impossible right now.  They kinda went away a couple years ago when
> things got crazy. 

I was talking main PC use case, not NAS. :)
The minimalist form factor doesn’t really impede me. I don’t have any HDDs 
in my PC anymore (too noisy), so why keep space for it. And while I do like 
to game a little bit, I find a full GPU too expensive and hungry, because it 
will be bored most of the time.

The rest can be done with USB, which is the only thing a compact case often 
lacks in numbers.

> Since no one mentioned a better case, that Define thing may end up being
> it.  That Gamemax is cheaper but a lot less drive capacity.  Heck, when
> I bought my current case, which has space for five 3.5" and six 5 1/4"
> drives, I thought I'd never fill up just

Re: [gentoo-user] Invalid opcode after kernel update

2023-09-18 Thread Peter Böhm
On Monday, 18 September 2023, 20:52:27 CEST, Fernando Rodriguez wrote:
> On 9/18/23 11:04, Fernando Rodriguez wrote:
> > On 9/17/23 18:03, Alan Mackenzie wrote:
> > I will try to run it on gdb to find out which instruction is triggering
> > the fault.
> >
> > Thanks,
> > Fernando
>
> The crash is happening on AVX2 instructions. My CPU is Intel(R) Core(TM)
> i7-8809G CPU @ 3.10GHz and it's supposed to have AVX2 but I don't see it
> listed on /proc/cpuinfo. I can't reboot into the old kernel right now
> but I suspect that when I do it will be there because I kind of remember
>   seeing it there. Any clues?

It is Intel DOWNFALL, also called GDS (Gather Data Sampling).

Maybe you want to read: https://www.phoronix.com/review/downfall

Regards,
 Peter





Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
On Mon, Sep 18, 2023 at 02:59:22PM -0400, Rich Freeman wrote:

> > I have a four-bay NAS with server board (ASRock Rack E3C224D2I), actually my
> > last surviving Gentoo system. ;-) With IPMI-Chip (which alone takes several
> > watts), 16 GiB DDR3-ECC, an i3-4170 and 4×6 TB, it draws around 33..35 W
> > from the plug at idle — that is after I enabled all powersaving items in
> > powertop. Without them, it is around 10 W more. It has two gigabit ports
> > (plus IPMI port) and a 300 W 80+ gold PSU.
> 
> That's an ITX system though, and a very old one at that.

Well, you asked for entry-point server hardware with low idle consumption. 
;-)

I built it in November 2016. Even then it was old componentry, but I wanted 
to save €€€ and it was enough for my needs. I installed a Celeron G1840 for 
33 € because I thought it would be enough. I tested its AES performance 
beforehand (because it didn’t have AES-NI) and with 155 MB/s it was enough 
to saturate GbE. But since I ran ZFS on LUKS at the time (still do, until I 
change the setup for more capacity), I ran into a bottleneck during scrubs. 
So after a year, I paid over 100 € for the i3 which I should have bought 
from the get-go. :-/

> Not sure how
> useful more PCIe lanes are in a form factor like that.

Modern boards might come with NVMe slots that can be re-purposed for 
external cards.

-- 
Grüße | Greetings | Salut | Qapla’
Please do not share anything from, with or about me on any social network.

The more cheese, the more holes.  The more holes, the less cheese.
Ergo: the more cheese, the less cheese!


signature.asc
Description: PGP signature


Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Dale
Frank Steinmetzger wrote:
> On Mon, Sep 18, 2023 at 12:17:20AM -0500, Dale wrote:
>> Howdy,
>> […]
>> I've found a few cases that pique my interest depending on which way I go
>> with this.  One I found that has a lot of hard drive space and would
>> make a decent NAS box, the Fractal Design Node 804.  It's a cube shaped
>> thing but can hold a LOT of spinning rust. 10 drives plus I think space
>> for a SSD for the OS as well.
> These days you can always put your OS on an NVMe; faster access and two 
> fewer cables in the case (or one more slot for a data drive).
>

Well, I already have one SSD, sitting in the box in my safe.  I was
going to move the OS on my current rig but just haven't had the time and
energy to fool with it.  I plan to move the current mobo and such to the
NAS box.  It doesn't have anything but the SSD as an option.  The new
build will likely have an NVMe thingy tho.  That would be a good idea
for it.  Then put the SSD in what is my current rig but moved to the NAS
box.  Hmmm, there's a thought. 

>> […]
>> The downside, only micro ATX and
>> mini ITX mobo.  This is a serious down vote here.
> Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives, 
> you only need one SATA expander (with four or six on-board). Perhaps a fast 
> network card if one is needed, that makes two slots. You don’t get more RAM 
> slots with ATX, either. And, if not anything else, a smaller board means 
> (or can mean) lower power consumption and thus less heat.
>
> Speaking of RAM; might I interest you in server-grade hardware? The reason 
> being that you can then use ECC memory, which is a nice perk for storage.¹ 
> Also, the chance is higher to get sufficient SATA connectors on-board (maybe 
> in the form of an SFF connector, which is actually good, since it means 
> reduced “cable salad”).
> AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade board, 
> because they too support ECC. And DDR5 has basic (meaning 1 bit and 
> transparent to the OS) ECC built-in from the start.

I tend to need quite a few PCIe slots.  I like to have my own video
card.  I never liked the built-in ones.  I've also never had a
built-in network port work right either.  Every one of them always
had problems, if they worked at all.  PCI(e) network cards have always
worked great.  I also need PCIe slots for SATA expander cards.  If I use
the Define case, I'd like to spread that across at least two cards,
maybe three.  So, network, video and at least a couple SATA cards;
it adds up fast.  Sometimes, I wouldn't mind having the larger ATX with
extra PCIe slots.  Thought about having SAS cards and cables that
convert to SATA.  I think they do that.  That may make it just one
card.  I dunno.  I haven't dug deep into that yet.  Figure the case is a
good place to start.  Mobo, CPU and such next.  Figure the mobo will pick
memory for me since usually only one or two will work anyway. 


>> I was hoping to turn
>> my current rig into a NAS.  The mobo and such parts.  This won't be a
>> option with this case.  Otherwise, it gives ideas on what I'm looking
>> for.  And not.  ;-)
> I was going to upgrade my 9 years old Haswell system at some point to a new 
> Ryzen build. Have been looking around for parts and configs for perhaps two 
> years now but I can’t decide (perhaps some remember previous ramblings about 
> that). Now I actually consider buying a tiny Deskmini X300 after I found out 
> that it does support ACPI S3, but only with a specific UEFI version. No 
> 10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)

I thought about using a Raspberry Pi for a NAS box.  Just build more
than one of them.  Thing is, finding the parts for it is almost
impossible right now.  They kinda went away a couple years ago when
things got crazy. 


>> Another find.  The Fractal Design Define 7 XL.  This is more of a tower
>> type shape like my current rig.  I think I read with extra trays, it can
>> hold up to 18 drives.  One could have a fancy RAID setup and still have
>> huge storage space with that.  I think it also has SSD spots for drives
>> that could hold the OS itself.  This one is quite pricey tho.
> With so many drives, you should also include a pricey power supply. And/or a 
> server board which supports staggered spin-up. Also, drives of the home NAS 
> category (and consumer drives anyways) are only certified for operation in 
> groups of up to 8-ish. Anything above and you sail in grey warranty waters. 
> Higher-tier drives are specced for the vibrations of so many drives (at 
> least I hope, because that’s what they™ tell us).
>

I usually get a larger than needed power supply anyway.  I got a 650 or
700 watt in my current rig.  According to the UPS, I run less than 300
watts even when compiling and that includes the puter, monitor, router,
modem and a couple power supplies for external hard drives.  I tend to
plug the rest into regular outlets, not for battery backup power.  I figure
the power f

Re: [gentoo-user] Invalid opcode after kernel update

2023-09-18 Thread Fernando Rodriguez

On 9/18/23 14:52, Fernando Rodriguez wrote:

On 9/18/23 11:04, Fernando Rodriguez wrote:

On 9/17/23 18:03, Alan Mackenzie wrote:
I will try to run it on gdb to find out which instruction is 
triggering the fault.


Thanks,
Fernando



The crash is happening on AVX2 instructions. My CPU is Intel(R) Core(TM) 
i7-8809G CPU @ 3.10GHz and it's supposed to have AVX2 but I don't see it 
listed on /proc/cpuinfo. I can't reboot into the old kernel right now 
but I suspect that when I do it will be there because I kind of remember 
  seeing it there. Any clues?




Found this on my journal: "GDS: Microcode update needed! Disabling AVX 
as mitigation." So I guess it's a microcode issue. I'm using dracut with 
--early-microcode and I have CONFIG_MICROCODE_INTEL set and I have the 
latest (as of Friday) intel-microcode. I don't have initramfs enabled 
for intel-microcode but never did and it was working. Will try it when I 
get back, gotta run now. Any more ideas?
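A quick way to check what the running kernel decided (the sysfs file only exists on recent x86 kernels, so this is environment-dependent):

```shell
cat /sys/devices/system/cpu/vulnerabilities/gather_data_sampling
grep -c avx2 /proc/cpuinfo    # 0 if the kernel hid AVX2 as the GDS mitigation
```

If the first line says "Vulnerable: No microcode", the early-microcode path in the initramfs isn't actually loading the update, which would match the journal message above.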


--

Fernando Rodriguez




Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Rich Freeman
On Mon, Sep 18, 2023 at 9:02 AM Frank Steinmetzger  wrote:
>
> On Mon, Sep 18, 2023 at 07:16:17AM -0400, Rich Freeman wrote:
>
> > On Mon, Sep 18, 2023 at 6:13 AM Frank Steinmetzger  wrote:
> > >
> > > On Mon, Sep 18, 2023 at 12:17:20AM -0500, Dale wrote:
>
> > because a NIC is going to need a 4-8x port
> > most likely
>
> Really? PCIe 3.0 has 1 GB/s/lane, that is 8 Gbps/lane, so almost as much as
> 10 GbE.

I can't find any 10GbE NICs that use a 1x slot - if you can I'll be
impressed.  In theory somebody could probably make one that uses PCIe
v4/5 or so, but I'm not seeing one today.

If it needs more than a 1x slot, then it is all moot after that, as
most consumer motherboards tend to have 1x slots, a 16x slot, and
MAYBE a 4x slot in a 16x physical form.  Oh, and good luck finding
boards with an open end on the slot, even if there would be room to
let a card dangle.

My point with micro ATX was that with consumer CPUs having so few
lanes available having room for more slots wouldn't help, as there
wouldn't be lanes available to connect to them, unless you added a
switch.  That's something else which is really rare on motherboards.
I don't get why they charge $250 for an AM5 motherboard, and maybe
even have a switch on the X series ones, but they can't be bothered to
give you larger slots.  I can't imagine that all the lanes are busy
all the time, so a switch would probably help quite a bit.

> this is probably very restricted in
> length. Which will also be the case for 10 GbE, so probably no options for
> the outhouse. :D

With an SFP+ port you can just use fiber and go considerable
distances.  That's assuming you're running network to your outhouse,
and not bothering to put a switch in there (which would be more
logical).


> I have a four-bay NAS with server board (ASRock Rack E3C224D2I), actually my
> last surviving Gentoo system. ;-) With IPMI-Chip (which alone takes several
> watts), 16 GiB DDR3-ECC, an i3-4170 and 4×6 TB, it draws around 33..35 W
> from the plug at idle — that is after I enabled all powersaving items in
> powertop. Without them, it is around 10 W more. It has two gigabit ports
> (plus IPMI port) and a 300 W 80+ gold PSU.

That's an ITX system though, and a very old one at that.  Not sure how
useful more PCIe lanes are in a form factor like that.

> > The advantage of
> > distributed filesystems is that you can build them out of a bunch of
> > cheap boxes […]
>
> For a simple media storage, I personally would find this too cumbersome to
> manage. Especially if you stick to Gentoo and don’t have a homogeneous
> device pool (not to mention compile times).

I don't generally use Gentoo just to run containers.  On a k8s box the
box itself basically does nothing but run k8s.  I probably only run
about 5 commands to provision one from bare metal.  :)

-- 
Rich



Re: [gentoo-user] Invalid opcode after kernel update

2023-09-18 Thread Fernando Rodriguez

On 9/18/23 11:04, Fernando Rodriguez wrote:

On 9/17/23 18:03, Alan Mackenzie wrote:
I will try to run it on gdb to find out which instruction is triggering 
the fault.


Thanks,
Fernando



The crash is happening on AVX2 instructions. My CPU is Intel(R) Core(TM) 
i7-8809G CPU @ 3.10GHz and it's supposed to have AVX2 but I don't see it 
listed on /proc/cpuinfo. I can't reboot into the old kernel right now 
but I suspect that when I do it will be there because I kind of remember 
 seeing it there. Any clues?
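One quick way to check (a sketch; paths are standard Linux procfs): count how many logical CPUs report avx2 in the flags the kernel exposes, and note the microcode revision, then compare both between the old and new kernel boots.

```shell
# Count logical CPUs whose "flags" line includes avx2. Zero here, but
# nonzero under the old kernel, would point at the kernel/microcode
# combination hiding the feature rather than the CPU lacking it.
avx2_count=$(grep -c -w avx2 /proc/cpuinfo)
echo "logical CPUs reporting avx2: $avx2_count"

# Microcode revision the kernel applied -- worth comparing across boots:
grep -m1 '^microcode' /proc/cpuinfo || true
```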


--

Fernando Rodriguez




Re: [gentoo-user] Controlling emerges

2023-09-18 Thread Rich Freeman
On Mon, Sep 18, 2023 at 12:13 PM Alan McKinnon  wrote:
>
> Whether you just let emerge do its thing or try to get it to do big packages on 
> their own, everything is still going to use the same number of CPU cycles 
> overall and you will save nothing.

That is true of CPU, but not RAM.  The problem with large parallel
builds is that for 95% of packages they're fine, and for a few
packages they'll eat up all the RAM in the system until the OOM killer
kicks in, or the system just goes into a swap storm (which can cause
panics with some less-than-perfect kernel drivers).

I'm not aware of any simple solutions.  I do have some packages set to
just build with a small number of jobs, but that won't prevent other
packages from being built alongside them.  Usually that is enough
though.  It is just frustrating to watch a package take all day to
build because I can't use more than -j2 or so without running out of
RAM, usually just at one step of the build process.

I can't see anybody bothering with this, but in theory packages could
have a variable to hint at the max RAM consumed per job, and the max
number of jobs it will run.  Then the package manager could take the
lesser of -j and the max jobs the package can run, multiply it by the
RAM requirement, and compare that to available memory (or have a
setting to limit max RAM).  Basically treat RAM as a resource and let
the package manager reduce -j to manage it if necessary.
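The arithmetic being proposed can be sketched in a few lines of shell. To be clear, the per-package hints (`RAM_PER_JOB_MB`, `PKG_MAX_JOBS` below) are hypothetical — no such ebuild variables exist today; only the MemAvailable lookup is real.

```shell
# Hypothetical per-package hints (do not exist in any ebuild today):
RAM_PER_JOB_MB=2048   # hinted peak RAM per compile job
PKG_MAX_JOBS=16       # hinted max useful parallelism for this package
MAKEOPTS_JOBS=20      # the user's -j setting

# What the machine can actually spare right now:
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
: "${avail_mb:=8192}"   # fallback if MemAvailable is absent

# Take the lesser of the user's -j and the package's own ceiling...
jobs=$(( MAKEOPTS_JOBS < PKG_MAX_JOBS ? MAKEOPTS_JOBS : PKG_MAX_JOBS ))
# ...then shrink it until the projected footprint fits in available RAM.
while [ $(( jobs * RAM_PER_JOB_MB )) -gt "$avail_mb" ] && [ "$jobs" -gt 1 ]; do
    jobs=$(( jobs - 1 ))
done
echo "would build with -j${jobs}"
```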

Hmm, I guess a workaround would be to set ulimits on the portage user
so that emerge is killed before RAM use gets too out of hand.  That
won't help complete builds, but it would at least keep it from killing
the system.
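The ulimit workaround might look like the following — the 32 GiB figure is made up and would need tuning to the machine; the `limits.conf` line is one plausible way to make it stick for the portage user.

```shell
# Cap the address space (in KiB, per bash/sh ulimit -v) for a subshell and
# everything it spawns, so a runaway build dies on allocation failure
# instead of swamping the whole box.
cap_kib=33554432   # 32 GiB -- illustrative, tune to your RAM
new_cap=$( ( ulimit -v "$cap_kib" 2>/dev/null && ulimit -v ) )
echo "address-space cap inside the subshell: ${new_cap} KiB"

# A persistent variant for the portage user could go in
# /etc/security/limits.conf, e.g.:
#   portage  hard  as  33554432
```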

-- 
Rich



Re: [gentoo-user] Controlling emerges

2023-09-18 Thread Dale
Alan McKinnon wrote:
>
>
> On Mon, Sep 18, 2023 at 6:03 PM Peter Humphrey  wrote:
>
> On Monday, 18 September 2023 14:48:46 BST Alan McKinnon wrote:
> > On Mon, Sep 18, 2023 at 3:44 PM Peter Humphrey 
> >
> > wrote:
> > > It may be less complex than you think, Jack. I envisage a
> package being
> > > marked
> > > as solitary, and when portage reaches that package, it waits
> until all
> > > current
> > > jobs have finished, then it starts the solitary package with the
> > > environment
> > > specified for it, and it doesn't start the next one until that
> one has
> > > finished.
> > > The dependency calculation shouldn't need to be changed.
> > >
> > > It seems simple the way I see it.
> >
> > How does that improve emerge performance overall?
>
> By allocating all the system resources to huge packages while not
> flooding the
> system with lesser ones. For example, I can set -j20 for
> webkit-gtk today
> without overflowing the 64GB RAM, and still have 4 CPU threads
> available to
> other tasks. The change I've proposed should make the whole
> operation more
> efficient overall and take less time.
>
> As things stand today, I have to make do with -j12 or so, wasting
> time and
> resources. I have load-average set at 32, so if I were to set -j20
> generally
> I'd run out of RAM in no time. I've had many instances of packages
> failing to
> compile in a large update, but going just fine on their own; and
> I've had
> mysterious operational errors resulting, I suspect, from otherwise
> undetected
> miscompilation.
>
> Previous threads have more detail of what I've tried already.
>
>
> I did read all those but no matter how you move things around you
> still have only X resources available all the time.
> Whether you just let emerge do its thing or try to get it to do big
> packages on their own, everything is still going to use the same
> number of CPU cycles overall and you will save nothing.
>
> If webkit-gtk is the only big package, have you considered:
>
> emerge -1v webkit-gtk && emerge -avuND @world?
>
>
> What you have is not a portage problem. It is an orthodox parallelism
> problem, and I think you are thinking your constraint is unique in the
> world - it isn't.
> With parallelism, trying to fiddle single nodes to improve things
> overall never really works out.
>
> Just my $0.02
>
>
> Alan
>
> -- 
> Alan McKinnon
> alan dot mckinnon at gmail dot com


I have to admit, I wish I could tell emerge to compile certain packages
on their own as well.  LOo, that qtweb package and a few others. 
Sometimes they end up naturally compiling on their own but sometimes, I
end up with LOo, Seamonkey or Firefox, or that qtweb package trying to
compile at the same time in some combination.  Sometimes, all four hit
at once.  It's bad enough when it is just two of them but when they all
hit, it causes problems.  It would be nice if we could set up a list
that tells emerge to emerge only one at a time just like we tell it not
to use tmpfs for certain builds. 
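The "don't use tmpfs for these" list Dale mentions is the existing package.env mechanism; a one-at-a-time flag would need new portage support, but the list half might look like this (written to a scratch directory here instead of the live /etc/portage; package names are just examples):

```shell
# Scratch stand-in for /etc/portage so the sketch is safe to run.
pdir=$(mktemp -d)
mkdir -p "$pdir/env"

# Env snippet: build these packages on disk instead of a tmpfs TMPDIR.
cat > "$pdir/env/notmpfs.conf" <<'EOF'
PORTAGE_TMPDIR="/var/tmp/notmpfs"
EOF

# The per-package list mapping the heavyweights to that snippet.
cat > "$pdir/package.env" <<'EOF'
app-office/libreoffice   notmpfs.conf
www-client/firefox       notmpfs.conf
dev-qt/qtwebengine       notmpfs.conf
EOF

entries=$(grep -c notmpfs.conf "$pdir/package.env")
echo "$entries packages redirected off tmpfs"
```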

While just emerging them first might work, it also limits emerge to just
doing that package instead of the whole update.  It could also have
dependencies that want a lot of resources.  I don't know about most
people but I run my updates while I sleep.  Having the option to set
that up would be nice.  It's not like packages are getting any smaller
either.  This is a growing problem. 

I have no idea how to do this but I do like the idea. 

Dale

:-)  :-) 


Re: [gentoo-user] Controlling emerges

2023-09-18 Thread John Blinka
On Mon, Sep 18, 2023 at 12:13 PM Alan McKinnon 
wrote:

>
>
> If webkit-gtk is the only big package, have you considered:
>
> emerge -1v webkit-gtk && emerge -avuND @world?
>
>
> What you have is not a portage problem. It is an orthodox parallelism
> problem, and I think you are thinking your constraint is unique in the world
> - it isn't.
> With parallelism, trying to fiddle single nodes to improve things overall
> never really works out.
>
> Just my $0.02
>
>
> Alan
>

I use this idea, but it requires (for me) a more sophisticated
implementation. As is, it pulls in webkit-gtk-x.y.z and
webkit-gtk-x.y.z-r410 simultaneously - for my portage setup. I don’t have
the memory to handle both at the same time. It’s guaranteed to crash on my
system.

Instead, I do a preliminary emerge -p, saving the specific package
builds to a file. I then inspect the file to see what portage wants to do.
Too often, the file contains webkit-gtk-x.y.z and webkit-gtk-x.y.z-r410 in
sequence, usually preceded and followed by other packages. Portage always
wants to build both versions simultaneously - guaranteed crash for me.

Instead of invoking emerge, I write a little bash script to emerge the
preceding packages in parallel, followed by a serial webkit-gtk-x.y.z,
followed by a serial webkit-gtk-x.y.z-r410, and then finally all the
remaining packages. Four emerge invocations in sequence. The script builds
specific versions, i.e., =net-libs/webkit-gtk-x.y.z, to ensure it builds only
one package at a time. It's trivial to write.
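A dry-run sketch of that four-phase driver (the package atoms are placeholders, and `EMERGE` is set to `echo emerge` so nothing is actually built; point it at the real emerge to use it):

```shell
EMERGE="echo emerge"   # swap for the real emerge binary to run it for real
phases=0
run() { phases=$((phases + 1)); $EMERGE "$@" || exit 1; }

run --oneshot =dev-libs/foo-1.2 =dev-libs/bar-3.4   # predecessors, in parallel
run --oneshot =net-libs/webkit-gtk-2.40.5            # first webkit, alone
run --oneshot =net-libs/webkit-gtk-2.40.5-r410       # second webkit, alone
run --update --deep --newuse @world                  # everything remaining
echo "ran $phases emerge phases in sequence"
```

Each `run` aborts the whole script if its phase fails, which matches the "reliable overnight unattended upgrades" goal: nothing later runs against a half-finished earlier phase.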

A problem arises when splitting up builds as you suggest. Emerge has its
own ideas about what it’s going to do - and in what sequence. When you try
to impose a build order not of its making, emerge will often do something
unintuitive and frustrating to you. I’ve learned to respect its sequencing.
This technique keeps portage happy and predictable by using its sequencing.
It gives me reliable overnight unattended upgrades.

John Blinka

>


Re: [gentoo-user] Certain packages refuse to use binary from build save, the -k thing.

2023-09-18 Thread Dale
Frank Steinmetzger wrote:
> Am Fri, Sep 15, 2023 at 11:44:03PM -0500 schrieb Dale:
>> Howdy,
> Hi
>
> instead of going berserk mode and wasting kWh on rebuilding “just in case it 
> might help”, why not try and dig a little deeper.
>
>> A couple of my video players are not playing videos correctly.
> Which players?
>
>> I've
>> rebuilt a few things but no change.  Some work fine, some give a error
>> about a bad index or something, some just don't even try at all.
> What is the exact error message?
> Can you remux the file with ffmpeg?
> `ffmpeg -i Inputfile.ext -c copy -map 0 Outputfile.ext`
> This will take all streams of the input file and put them into a new file 
> without re-encoding. Does ffmpeg give any errors about bad data? Can you 
> play the produced file?
>
>> The videos come from various sources and are of different file extensions. 
> Can we have a look at some of those videos? The least I can try is to see 
> whether they work here or show any sign of corruption.
>


I think one of the videos had something bad that was causing other
issues somehow.  I ended up deleting any video that didn't have a
thumbnail and rebooting, to make sure all processes were killed.  This
started, I believe, with a bad file.  Then it affected other videos I
tried to watch.  Once I rebooted, everything was back to normal.  While
I think it was a bad video, I'm not sure it was that either.  It is odd
that rebooting fixed it tho. 

I only had to rebuild a few packages since I have it set to save
binaries of everything I build.  It's just that those few packages
wouldn't work for some reason.  I think it may have had something to do
with video file problem.  After the reboot, everything worked as
expected, including installing from binaries.  So, it's all good now. 

I was actually in the process of starting a fresh thread on the video
problem.  I was collecting info when I started seeing some odd things. 
I decided to reboot to see what that did.  I wasn't expecting it to fix
the whole problem, this isn't windoze after all, but it did.  It's been
running fine ever since. 

Now I just wish I knew exactly what the cause was.  Was it the video,
something else, something I'd never think of?  Lots of questions.  No
real way to answer them tho.  Hard to diagnose something that is
working.  lol

Thanks for offering to look into it tho.  I was working on it when the
reboot fixed it. 

Now to read and think over replies to thread about a new case.  Sort of
skimmed over it.  Sounds like ideas for better mobos, which may be a
good idea. 

Dale

:-)  :-) 



Re: [gentoo-user] Controlling emerges

2023-09-18 Thread Michael
On Monday, 18 September 2023 17:13:04 BST Alan McKinnon wrote:
> On Mon, Sep 18, 2023 at 6:03 PM Peter Humphrey 
> 
> wrote:
> > On Monday, 18 September 2023 14:48:46 BST Alan McKinnon wrote:
> > > On Mon, Sep 18, 2023 at 3:44 PM Peter Humphrey 
> > > 
> > > wrote:
> > > > It may be less complex than you think, Jack. I envisage a package
> > > > being
> > > > marked
> > > > as solitary, and when portage reaches that package, it waits until all
> > > > current
> > > > jobs have finished, then it starts the solitary package with the
> > > > environment
> > > > specified for it, and it doesn't start the next one until that one has
> > > > finished.
> > > > The dependency calculation shouldn't need to be changed.
> > > > 
> > > > It seems simple the way I see it.
> > > 
> > > How does that improve emerge performance overall?
> > 
> > By allocating all the system resources to huge packages while not flooding
> > the
> > system with lesser ones. For example, I can set -j20 for webkit-gtk today
> > without overflowing the 64GB RAM, and still have 4 CPU threads available
> > to
> > other tasks. The change I've proposed should make the whole operation more
> > efficient overall and take less time.
> > 
> > As things stand today, I have to make do with -j12 or so, wasting time and
> > resources. I have load-average set at 32, so if I were to set -j20
> > generally
> > I'd run out of RAM in no time. I've had many instances of packages failing
> > to
> > compile in a large update, but going just fine on their own; and I've had
> > mysterious operational errors resulting, I suspect, from otherwise
> > undetected
> > miscompilation.
> > 
> > Previous threads have more detail of what I've tried already.
> > 
> > 
> > I did read all those but no matter how you move things around you still
> 
> have only X resources available all the time.
> Whether you just let emerge do its thing or try to get it to do big packages
> on their own, everything is still going to use the same number of CPU
> cycles overall and you will save nothing.
> 
> If webkit-gtk is the only big package, have you considered:
> 
> emerge -1v webkit-gtk && emerge -avuND @world?
> 
> 
> What you have is not a portage problem. It is an orthodox parallelism
> problem, and I think you are thinking your constraint is unique in the world
> - it isn't.
> With parallelism, trying to fiddle single nodes to improve things overall
> never really works out.
> 
> Just my $0.02
> 
> 
> Alan

I think there is a level of complexity involved which will make (m)any 
attempts on optimisation difficult, because EMERGE_DEFAULT_OPTS competes for 
resources against MAKEOPTS, resulting in a trade-off between their optimal 
settings.  Parallelisation becomes difficult to maximise on the basis of some 
presets when not all updates have the same combination of small vs. large 
packages, dependent packages queue up before dependencies are built, various 
emerge stages are processed linearly, some versions of gcc may get hungrier 
for RAM and whatever else I haven't accounted for.

Someone with a PhD on multivariate stochastic analysis could probably come up 
with some nifty code to include in portage?  ;-)




RE: [gentoo-user] Controlling emerges

2023-09-18 Thread Laurence Perkins


> From: Alan McKinnon alan.mckin...@gmail.com
> Sent: Monday, September 18, 2023 9:13 AM
> To: gentoo-user@lists.gentoo.org
> Subject: Re: [gentoo-user] Controlling emerges
>
>
>
> On Mon, Sep 18, 2023 at 6:03 PM Peter Humphrey 
> pe...@prh.myzen.co.uk wrote:
> On Monday, 18 September 2023 14:48:46 BST Alan McKinnon wrote:
> > On Mon, Sep 18, 2023 at 3:44 PM Peter Humphrey 
> > pe...@prh.myzen.co.uk
> >
> > wrote:
> > > It may be less complex than you think, Jack. I envisage a package being
> > > marked
> > > as solitary, and when portage reaches that package, it waits until all
> > > current
> > > jobs have finished, then it starts the solitary package with the
> > > environment
> > > specified for it, and it doesn't start the next one until that one has
> > > finished.
> > > The dependency calculation shouldn't need to be changed.
> > >
> > > It seems simple the way I see it.
> >
> > How does that improve emerge performance overall?
>
> By allocating all the system resources to huge packages while not flooding the
> system with lesser ones. For example, I can set -j20 for webkit-gtk today
> without overflowing the 64GB RAM, and still have 4 CPU threads available to
> other tasks. The change I've proposed should make the whole operation more
> efficient overall and take less time.
>
> As things stand today, I have to make do with -j12 or so, wasting time and
> resources. I have load-average set at 32, so if I were to set -j20 generally
> I'd run out of RAM in no time. I've had many instances of packages failing to
> compile in a large update, but going just fine on their own; and I've had
> mysterious operational errors resulting, I suspect, from otherwise undetected
> miscompilation.
>
> Previous threads have more detail of what I've tried already.
>
> I did read all those but no matter how you move things around you still have 
> only X resources available all the time.
> Whether you just let emerge do its thing or try to get it to do big packages on 
> their own, everything is still going to use the same number of CPU cycles 
> overall and you will save nothing.
>
> If webkit-gtk is the only big package, have you considered:
>
> emerge -1v webkit-gtk && emerge -avuND @world?
>
>
> What you have is not a portage problem. It is an orthodox parallelism problem, 
> and I think you are thinking your constraint is unique in the world - it isn't.
> With parallelism, trying to fiddle single nodes to improve things overall 
> never really works out.
>
> Just my $0.02
>
>
> Alan
>
> --
> Alan McKinnon
> alan dot mckinnon at gmail dot com
>

Note that on my systems I just make heavy use of the various load-average 
limiting options and as long as two of the big packages don't start within 
seconds of each other it does a pretty good job of letting them run by 
themselves.

If things do get in a snarl, you can always use kill -18/19 to suspend a few 
compile jobs until the system stops thrashing and resume them as capacity 
permits.
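The signal numbers in the post are the x86 Linux values for SIGCONT (18) and SIGSTOP (19); the names are more portable. Demonstrated here on a stand-in `sleep` process — against real compile jobs you would target the compiler, e.g. `pkill -19 cc1plus`:

```shell
sleep 30 &                # stand-in for a runaway compile job
pid=$!

kill -19 "$pid"           # SIGSTOP: freeze it (no CPU, RAM can be swapped out)
sleep 1                   # give the state change a moment to land
state_stopped=$(awk '{print $3}' "/proc/$pid/stat")   # expect "T" (stopped)

kill -18 "$pid"           # SIGCONT: let it run again once the box recovers
sleep 1
state_running=$(awk '{print $3}' "/proc/$pid/stat")   # back to "S"/"R"

kill "$pid" 2>/dev/null   # clean up the stand-in
echo "stopped=$state_stopped resumed=$state_running"
```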

LMP


Re: [gentoo-user] Controlling emerges

2023-09-18 Thread Alan McKinnon
On Mon, Sep 18, 2023 at 6:03 PM Peter Humphrey 
wrote:

> On Monday, 18 September 2023 14:48:46 BST Alan McKinnon wrote:
> > On Mon, Sep 18, 2023 at 3:44 PM Peter Humphrey 
> >
> > wrote:
> > > It may be less complex than you think, Jack. I envisage a package being
> > > marked
> > > as solitary, and when portage reaches that package, it waits until all
> > > current
> > > jobs have finished, then it starts the solitary package with the
> > > environment
> > > specified for it, and it doesn't start the next one until that one has
> > > finished.
> > > The dependency calculation shouldn't need to be changed.
> > >
> > > It seems simple the way I see it.
> >
> > How does that improve emerge performance overall?
>
> By allocating all the system resources to huge packages while not flooding
> the
> system with lesser ones. For example, I can set -j20 for webkit-gtk today
> without overflowing the 64GB RAM, and still have 4 CPU threads available
> to
> other tasks. The change I've proposed should make the whole operation more
> efficient overall and take less time.
>
> As things stand today, I have to make do with -j12 or so, wasting time and
> resources. I have load-average set at 32, so if I were to set -j20
> generally
> I'd run out of RAM in no time. I've had many instances of packages failing
> to
> compile in a large update, but going just fine on their own; and I've had
> mysterious operational errors resulting, I suspect, from otherwise
> undetected
> miscompilation.
>
> Previous threads have more detail of what I've tried already.
>
>
I did read all those but no matter how you move things around you still
have only X resources available all the time.
Whether you just let emerge do its thing or try to get it to do big packages
on their own, everything is still going to use the same number of CPU
cycles overall and you will save nothing.

If webkit-gtk is the only big package, have you considered:

emerge -1v webkit-gtk && emerge -avuND @world?


What you have is not a portage problem. It is an orthodox parallelism
problem, and I think you are thinking your constraint is unique in the world
- it isn't.
With parallelism, trying to fiddle single nodes to improve things overall
never really works out.

Just my $0.02


Alan

-- 
Alan McKinnon
alan dot mckinnon at gmail dot com


Re: [gentoo-user] Controlling emerges

2023-09-18 Thread Peter Humphrey
On Monday, 18 September 2023 14:48:46 BST Alan McKinnon wrote:
> On Mon, Sep 18, 2023 at 3:44 PM Peter Humphrey 
> 
> wrote:
> > It may be less complex than you think, Jack. I envisage a package being
> > marked
> > as solitary, and when portage reaches that package, it waits until all
> > current
> > jobs have finished, then it starts the solitary package with the
> > environment
> > specified for it, and it doesn't start the next one until that one has
> > finished.
> > The dependency calculation shouldn't need to be changed.
> > 
> > It seems simple the way I see it.
> 
> How does that improve emerge performance overall?

By allocating all the system resources to huge packages while not flooding the 
system with lesser ones. For example, I can set -j20 for webkit-gtk today 
without overflowing the 64GB RAM, and still have 4 CPU threads available to 
other tasks. The change I've proposed should make the whole operation more 
efficient overall and take less time.

As things stand today, I have to make do with -j12 or so, wasting time and 
resources. I have load-average set at 32, so if I were to set -j20 generally 
I'd run out of RAM in no time. I've had many instances of packages failing to 
compile in a large update, but going just fine on their own; and I've had 
mysterious operational errors resulting, I suspect, from otherwise undetected 
miscompilation.

Previous threads have more detail of what I've tried already.

-- 
Regards,
Peter.






Re: [gentoo-user] Invalid opcode after kernel update

2023-09-18 Thread Fernando Rodriguez

On 9/17/23 18:03, Alan Mackenzie wrote:

Hello, Fernando.

On Sun, Sep 17, 2023 at 17:49:22 -0400, Fernando Rodriguez wrote:

A few months ago, after updating my kernel, I started getting an invalid
opcode error during boot in the init process of my initramfs, which I did
rebuild. Switching to the old kernel and initramfs fixed the problem, so
I kept that kernel for a few months for lack of time.



Today I rebuilt the whole system using `emerge -e @world` and after that
I'm able to boot the new kernel but now some pre-compiled packages (and
some that emerge -e missed because the ebuild was masked) crash with
illegal opcode. In the case of chrome it's not crashing but it only
renders garbage for webpages.



Does anyone have a clue what is happening? It's like the instruction set
changed after the kernel update (or was it the microcode?)


Could it be that you've got a sporadic RAM failure?  Running the
standard RAM test (the one you boot into, I've forgotten its name) for
many hours might pin down the problem.


I ran the test to be sure, but it's not sporadic. It happens all the time 
with the same pre-built binaries. My last working kernel was 5.15.122; 
if I boot from that kernel, everything works. Before the update, 
everything was built with -march=native, and before the 'emerge -e' I 
switched to -mtune=generic. But I don't think it was the flags that 
messed it up but the act of rebuilding, because after rebuilding the 
whole system I'm still having issues with pre-compiled binaries, and 
those should be generic builds. Strangely, the same binaries that crash 
on the host system run fine in a VM using hardware virtualization.


I will try to run it on gdb to find out which instruction is triggering 
the fault.
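The gdb session itself isn't executed below (gdb and the faulting binary are assumptions, and `./crashing-binary` is a placeholder name); the sequence to pin down the illegal instruction would be roughly:

```shell
# Printed rather than executed -- a sketch of the debugging recipe.
# $pc is kept literal: it is a gdb convenience variable, not a shell one.
steps='
ulimit -c unlimited                                # allow a core dump
./crashing-binary                                  # dies with SIGILL, leaves a core
gdb --batch ./crashing-binary core -ex "x/i $pc"   # disassemble the faulting insn
'
printf '%s' "$steps"
```

If the printed instruction turns out to be an AVX2 one (e.g. a `vpbroadcast*` or `vperm*`), that would line up with the missing avx2 flag in /proc/cpuinfo.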


Thanks,
Fernando




Re: [gentoo-user] Controlling emerges

2023-09-18 Thread Alan McKinnon
On Mon, Sep 18, 2023 at 3:44 PM Peter Humphrey 
wrote:

>
>
> It may be less complex than you think, Jack. I envisage a package being
> marked
> as solitary, and when portage reaches that package, it waits until all
> current
> jobs have finished, then it starts the solitary package with the
> environment
> specified for it, and it doesn't start the next one until that one has
> finished.
> The dependency calculation shouldn't need to be changed.
>
> It seems simple the way I see it.
>


How does that improve emerge performance overall?

-- 
Alan McKinnon
alan dot mckinnon at gmail dot com


Re: [gentoo-user] Controlling emerges

2023-09-18 Thread Peter Humphrey
On Monday, 18 September 2023 13:59:03 BST Jack wrote:
> On 9/18/23 08:00, Peter Humphrey wrote:
> > Hello list,
> > 
> > We've had a few discussions here on how to balance the parameters to
> > emerge
> > to make the most of the resources available. Here's another idea:
> > 
> > On the one hand, big jobs should be able to use the maximum CPU
> > performance and RAM capacity, but on the other we don't want to flood the
> > system.
> > 
> > Therefore, I think it would be useful to be able to specify in env and
> > package.env that a job should be run on its own - if any other emerge jobs
> > are scheduled, wait until they're finished. Combine that with a specific
> > MAKEOPTS, and we'd have a more flexible deployment of resources.
> > 
> > Is this feasible? What have I not thought of?
> 
> I've had exactly the same thought for some time now.  My guess is that
> it is theoretically possible to add some USE flag or ENV var for portage
> to recognize, but I don't know the portage internals well enough to
> guess how much effort it would be.  Given that portage orders ebuilds in
> a single emerge session based on some dependency graph, that seems like
> a good place to put the necessary hooks.
> 
> As a starting point, one option might be to create a special/magic
> ebuild and make it a dependency of those jobs that need to be run alone,
> and have something about it that won't run if anything else is still
> running.  But, I don't know if those pre-checks (such as checking for
> enough RAM and/or disk space) can be run at build time and not just at
> portage startup time.  The other possible problem with that approach
> would be to be sure that ebuild gets run separately for each other
> ebuild that depends on it - not all of them depending on it being run
> once. Also, those blocking ebuilds have to work so that if several of them
> are queued (and running their "wait for everything else to finish"
> scripts), exactly one of them needs to start. I don't know if those
> pre-check scripts count as running before or within the ebuild itself.

It may be less complex than you think, Jack. I envisage a package being marked 
as solitary, and when portage reaches that package, it waits until all current 
jobs have finished, then it starts the solitary package with the environment 
specified for it, and it doesn't start the next one until that one has 
finished. 
The dependency calculation shouldn't need to be changed.

It seems simple the way I see it.

-- 
Regards,
Peter.






Re: [gentoo-user] Controlling emerges

2023-09-18 Thread Jack

On 9/18/23 08:00, Peter Humphrey wrote:

Hello list,

We've had a few discussions here on how to balance the parameters to emerge
to make the most of the resources available. Here's another idea:

On the one hand, big jobs should be able to use the maximum CPU
performance and RAM capacity, but on the other we don't want to flood the
system.

Therefore, I think it would be useful to be able to specify in env and
package.env that a job should be run on its own - if any other emerge jobs are
scheduled, wait until they're finished. Combine that with a specific MAKEOPTS,
and we'd have a more flexible deployment of resources.

Is this feasible? What have I not thought of?


I've had exactly the same thought for some time now.  My guess is that 
it is theoretically possible to add some USE flag or ENV var for portage 
to recognize, but I don't know the portage internals well enough to 
guess how much effort it would be.  Given that portage orders ebuilds in 
a single emerge session based on some dependency graph, that seems like 
a good place to put the necessary hooks.


As a starting point, one option might be to create a special/magic 
ebuild and make it a dependency of those jobs that need to be run alone, 
and have something about it that won't run if anything else is still 
running.  But, I don't know if those pre-checks (such as checking for 
enough RAM and/or disk space) can be run at build time and not just at 
portage startup time.  The other possible problem with that approach 
would be to be sure that ebuild gets run separately for each other 
ebuild that depends on it - not all of them depending on it being run 
once. Also, those blocking ebuilds have to work so that if several of them 
are queued (and running their "wait for everything else to finish" 
scripts), exactly one of them needs to start. I don't know if those 
pre-check scripts count as running before or within the ebuild itself.
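Half of the original proposal already exists: per-package environment overrides via package.env, including a package-specific MAKEOPTS. The missing half is the "run alone" flag itself — the `PORTAGE_SOLITARY` line below is purely hypothetical. Written to a scratch directory rather than the live /etc/portage:

```shell
cfg=$(mktemp -d)          # stand-in for /etc/portage
mkdir -p "$cfg/env"

# Per-package environment: big -j for the one huge package.
cat > "$cfg/env/huge.conf" <<'EOF'
MAKEOPTS="-j20"
# hypothetical -- no such variable exists in portage today:
# PORTAGE_SOLITARY="yes"
EOF

# Map the heavyweight package to that environment.
cat > "$cfg/package.env" <<'EOF'
net-libs/webkit-gtk   huge.conf
EOF

hits=$(grep -c MAKEOPTS "$cfg/env/huge.conf")
echo "MAKEOPTS overrides defined: $hits"
```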


Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
Am Mon, Sep 18, 2023 at 07:16:17AM -0400 schrieb Rich Freeman:

> On Mon, Sep 18, 2023 at 6:13 AM Frank Steinmetzger  wrote:
> >
> > Am Mon, Sep 18, 2023 at 12:17:20AM -0500 schrieb Dale:
> > > […]
> > > The downside, only micro ATX and
> > > mini ITX mobo.  This is a serious down vote here.
> >
> > Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
> > you only need one SATA expander (with four or six on-board). Perhaps a fast
> > network card if one is needed, that makes two slots.
> 
> Tend to agree.  The other factor here is that desktop-oriented CPUs
> tend to not have a large number of PCIe lanes free for expansion
> slots, especially if you want 1-2 NVMe slots.  (You also have to watch
> out as the lanes for those can be shared with some of the expansion
> slots so you can't use both.)
> 
> If you want to consider a 10GbE+ card I'd definitely get something
> with integrated graphics,

That is a recommendation in any case. If you are a gamer, you have a 
fallback in case the GPU kicks the bucket. And if not, your power bill goes 
way down.

> because a NIC is going to need a 4-8x port
> most likely 

Really? PCIe 3.0 has 1 GB/s/lane, that is 8 Gbps/lane, so almost as much as 
10 GbE. OTOH, 10 GbE is a major power sink. Granted, 1 GbE is not much when 
you’re dealing with numerous TB. And then there is network over Thunderbolt, 
of which I only recently learned. But this is probably very restricted in 
length. Which will also be the case for 10 GbE, so probably no options for 
the outhouse. :D

> > Speaking of RAM; might I interest you in server-grade hardware? The reason
> > being that you can then use ECC memory, which is a nice perk for storage.
> 
> That and way more PCIe lanes.  That said, it seems super-expensive,
> both in terms of dollars, and power use.  Is there any entry point
> into server-grade hardware that is reasonably priced, and which can
> idle at something reasonable (certainly under 50W)?

I have a four-bay NAS with server board (ASRock Rack E3C224D2I), actually my 
last surviving Gentoo system. ;-) With IPMI-Chip (which alone takes several 
watts), 16 GiB DDR3-ECC, an i3-4170 and 4×6 TB, it draws around 33..35 W 
from the plug at idle — that is after I enabled all powersaving items in 
powertop. Without them, it is around 10 W more. It has two gigabit ports 
(plus IPMI port) and a 300 W 80+ gold PSU.

> > I was going to upgrade my 9-year-old Haswell system at some point to a new
> > Ryzen build. Have been looking around for parts and configs for perhaps two
> > years now but I can’t decide (perhaps some remember previous ramblings about
> > that).
> 
> The latest zen generation is VERY nice, but also pretty darn
> expensive.  Going back to zen3 might get you more for the money,
> depending on how big you're scaling up.

I’ve been looking at Zen 3 the whole time, namely the 5700G APU. 5 times the 
performance of my i5, for less power, and good graphics performance for the 
occasional game. I’m a bit paranoid re. Zen 4’s inclusion of Microsoft 
Pluton (“Chip-to-Cloud security”), and Zen 4 in general has higher idle 
consumption. But now that Phoenix, the Zen 4 successor to the 5700G, is 
about to become available, I am again hesitant to pull the trigger, waiting 
for the pricetag.

> A big part of the cost of
> zen4 is the motherboard, so if you're building something very high end
> where the CPU+RAM dominates, then zen4 may be a better buy.

I’m fine with middle-class. In fact I always thought i7s to be overpriced 
compared to i5s. The plus in performance of top-tier parts is usually bought 
with disproportionately high power consumption (meaning heat and noise).

> If you just want a low-core system then you're paying a lot just to get
> started.

I want to get the best bang within my constraints, meaning the 5700G (
8 cores). The 5600G (6 cores) is much cheaper, but I want to get the best 
graphics I can get in an APU. And I am always irked by having 6 cores (12 
threads), because it’s not a power of 2, so percentages in load graphs will 
look skewed. :D

> The advantage of
> distributed filesystems is that you can build them out of a bunch of
> cheap boxes […]
> When you start getting up to a dozen drives the cost of getting them
> to all work on a single host starts going up.  You need big cases,
> expansion cards, etc.  Then when something breaks you need to find a
> replacement quickly from a limited pool of options.  If I lose a node
> on my Rook cluster I can just go to newegg and look at $150 used SFF
> PCs, then install the OS and join the cluster and edit a few lines of
> YAML and the disks are getting formatted...

For simple media storage, I personally would find this too cumbersome to 
manage, especially if you stick to Gentoo and don’t have a homogeneous 
device pool (not to mention compile times). I’d choose organisational 
simplicity over hardware availability. (My NAS isn’t running most of the 
time, mostly due to the power bill.)

[gentoo-user] Controlling emerges

2023-09-18 Thread Peter Humphrey
Hello list,

We've had a few discussions here on how to balance the parameters to emerge 
to make the most of the resources available. Here's another idea:

On the one hand, big jobs should be able to use the maximum CPU 
performance and RAM capacity, but on the other we don't want to flood the 
system.

Therefore, I think it would be useful to be able to specify in env and 
package.env that a job should be run on its own - if any other emerge jobs are 
scheduled, wait until they're finished. Combine that with a specific MAKEOPTS, 
and we'd have a more flexible deployment of resources.
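As far as I know there is no built-in "run alone" flag today; the closest approximation is per-package MAKEOPTS via package.env combined with emerge's global load limits. A sketch (the env file name `serialize.conf` is my own invention, and the package atoms are just examples):

```shell
# /etc/portage/env/serialize.conf  (hypothetical file name)
# Per-package override: build the heavyweights with fewer make jobs.
MAKEOPTS="-j4"

# /etc/portage/package.env -- map the heavyweights to that env file:
#   www-client/firefox  serialize.conf
#   sys-devel/llvm      serialize.conf

# Global throttle in /etc/portage/make.conf:
#   EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=8"
```

With `--load-average` set, emerge at least refrains from starting new build jobs while the system is already saturated, which gets part of the way to the "wait until the big one is done" behaviour.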

Is this feasible? What have I not thought of?

-- 
Regards,
Peter.


Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Rich Freeman
On Mon, Sep 18, 2023 at 6:13 AM Frank Steinmetzger  wrote:
>
> Am Mon, Sep 18, 2023 at 12:17:20AM -0500 schrieb Dale:
> > […]
> > The downside, only micro ATX and
> > mini ITX mobo.  This is a serious down vote here.
>
> Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
> you only need one SATA expander (with four or six on-board). Perhaps a fast
> network card if one is needed, that makes two slots.

Tend to agree.  The other factor here is that desktop-oriented CPUs
tend to not have a large number of PCIe lanes free for expansion
slots, especially if you want 1-2 NVMe slots.  (You also have to watch
out as the lanes for those can be shared with some of the expansion
slots so you can't use both.)

If you want to consider a 10GbE+ card I'd definitely get something
with integrated graphics, because a NIC is going to need a 4-8x port
most likely (maybe there are expensive ones that use later generations
and fewer lanes).  On most motherboards you may only get one slot with
that kind of bandwidth.

> Speaking of RAM; might I interest you in server-grade hardware? The reason
> being that you can then use ECC memory, which is a nice perk for storage.

That and way more PCIe lanes.  That said, it seems super-expensive,
both in terms of dollars, and power use.  Is there any entry point
into server-grade hardware that is reasonably priced, and which can
idle at something reasonable (certainly under 50W)?

> > I was hoping to turn
> > my current rig into a NAS.  The mobo and such parts.  This won't be a
> > option with this case.  Otherwise, it gives ideas on what I'm looking
> > for.  And not.  ;-)
>
> I was going to upgrade my 9-year-old Haswell system at some point to a new
> Ryzen build. Have been looking around for parts and configs for perhaps two
> years now but I can’t decide (perhaps some remember previous ramblings about
> that).

The latest zen generation is VERY nice, but also pretty darn
expensive.  Going back to zen3 might get you more for the money,
depending on how big you're scaling up.  A big part of the cost of
zen4 is the motherboard, so if you're building something very high end
where the CPU+RAM dominates, then zen4 may be a better buy.  If you
just want a low-core system then you're paying a lot just to get
started.

RE NAS: I used to build big boxes with lots of drives on ZFS.  These
days I'm using distributed filesystems (I've migrated from MooseFS to
Ceph, though both have their advantages).  The advantage of
distributed filesystems is that you can build them out of a bunch of
cheap boxes, vs trying to find one box that you can cram a dozen hard
drives into.  They're just much easier to expand.  Plus you get
host-level redundancy.  Ceph is better for HA - I can literally reboot
every host in my network (one at a time) and all my essential services
stay running.  MooseFS performs much better at small scale on hard
drives, but depends on a master node for the FOSS version, so if that
goes down the cluster is down (the locking behavior also seems to have
issues - I've had corruption issues with SQLite and such with it).

When you start getting up to a dozen drives the cost of getting them
to all work on a single host starts going up.  You need big cases,
expansion cards, etc.  Then when something breaks you need to find a
replacement quickly from a limited pool of options.  If I lose a node
on my Rook cluster I can just go to newegg and look at $150 used SFF
PCs, then install the OS and join the cluster and edit a few lines of
YAML and the disks are getting formatted...

> ¹ There was once a time when ECC was supported by all boards and CPUs. But
> then someone invented market segmentation to increase profits through
> upselling.

Yeah, zen1 used to support ECC on most motherboards.  Then later
motherboards dropped support.  Definitely a case of market
segmentation.

This is part of why I like storage implementations that have more
robustness built into the software.  Granted, it is still only as good
as your clients, but with distributed storage I really don't want to
be paying for ECC on all of my nodes.  If the client calculates a
checksum and it remains independent of the data, then any RAM
corruption should be detectable as a mismatch (that of course assumes
the checksum is preserved and not re-calculated at any point).
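The scheme can be illustrated with a toy sketch (not any particular filesystem's actual wire format): the client checksums the data once, the checksum travels with it unmodified, and any corruption in between shows up on read-back.

```python
import hashlib

def store(blob: bytes) -> tuple[bytes, str]:
    """Client-side: compute the checksum once; it must never be recomputed
    by intermediate hops, or corruption there would go unnoticed."""
    return blob, hashlib.sha256(blob).hexdigest()

def verify(blob: bytes, checksum: str) -> bool:
    """On read-back: recompute and compare against the stored checksum."""
    return hashlib.sha256(blob).hexdigest() == checksum

data, digest = store(b"important media file")
assert verify(data, digest)             # intact copy passes

corrupted = b"imp0rtant media file"     # a single flipped byte en route
assert not verify(corrupted, digest)    # mismatch exposes the corruption
```

The point being: as long as the checksum is computed on the client and preserved end-to-end, bad RAM anywhere in the storage path is detectable without every node needing ECC.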

-- 
Rich



Re: [gentoo-user] Computer case for new build

2023-09-18 Thread Frank Steinmetzger
Am Mon, Sep 18, 2023 at 12:17:20AM -0500 schrieb Dale:
> Howdy,
> […]
> I've found a few cases that pique my interest depending on which way I go
> with this.  One I found that has a lot of hard drive space and would
> make a decent NAS box, the Fractal Design Node 804.  It's a cube-shaped
> thing but can hold a LOT of spinning rust. 10 drives plus I think space
> for a SSD for the OS as well.

These days you can always put your OS on an NVMe; faster access and two 
fewer cables in the case (or one more slot for a data drive).

> […]
> The downside, only micro ATX and
> mini ITX mobo.  This is a serious down vote here.

Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives, 
you only need one SATA expander (with four or six on-board). Perhaps a fast 
network card if one is needed, that makes two slots. You don’t get more RAM 
slots with ATX, either. And, if not anything else, a smaller board means 
(or can mean) lower power consumption and thus less heat.

Speaking of RAM; might I interest you in server-grade hardware? The reason 
being that you can then use ECC memory, which is a nice perk for storage.¹ 
Also, the chance is higher to get sufficient SATA connectors on-board (maybe 
in the form of an SFF connector, which is actually good, since it means 
reduced “cable salad”).
AFAIK if you have a Ryzen PRO, then you can also use a consumer-grade board, 
because they too support ECC. And DDR5 has basic (meaning 1 bit and 
transparent to the OS) ECC built-in from the start.

> I was hoping to turn
> my current rig into a NAS.  The mobo and such parts.  This won't be a
> option with this case.  Otherwise, it gives ideas on what I'm looking
> for.  And not.  ;-)

I was going to upgrade my 9-year-old Haswell system at some point to a new 
Ryzen build. Have been looking around for parts and configs for perhaps two 
years now but I can’t decide (perhaps some remember previous ramblings about 
that). Now I actually consider buying a tiny Deskmini X300 after I found out 
that it does support ACPI S3, but only with a specific UEFI version. No 
10-gig USB and only 1-gig ethernet though. But it’s cute and small. :)

> Another find.  The Fractal Design Define 7 XL.  This is more of a tower
> type shape like my current rig.  I think I read with extra trays, it can
> hold up to 18 drives.  One could have a fancy RAID setup and still have
> huge storage space with that.  I think it also has SSD spots for drives
> that could hold the OS itself.  This one is quite pricey tho.

With so many drives, you should also include a pricey power supply. And/or a 
server board which supports staggered spin-up. Also, drives of the home NAS 
category (and consumer drives anyways) are only certified for operation in 
groups of up to 8-ish. Anything above and you sail in grey warranty waters. 
Higher-tier drives are specced for the vibrations of so many drives (at 
least I hope, because that’s what they™ tell us).

> To be honest, I kinda like the Fractal Design Define 7
> XL right now despite the higher cost.  I could make a NAS/backup box
> with it and I doubt I'd run out of drive space even if I started using
> RAID and mirrored everything, at a minimum.

With 12 drives, I would go for parity RAID with two parity drives per six 
drives, not for a mirror. That way you get 2/3 storage efficiency vs. 1/2 
and more robustness; in parity, any two drives may fail, but in a cluster of 
mirrors, only specific drives may fail (not two of the same mirror). If the 
drives are huge, nine drives with three parity drives may be even better 
(because rebuilds get scarier the bigger the drives get).

> 9 pairs of say 18TB drives
> would give around 145TBs of storage with a file system on it.

If you mirrored them all, you’d get 147 TiB. But as I said, use nine drives 
with a 3-drive parity and you get 98 TiB per group. With two groups 
(totalling 18 drives), you get 196 TiB.
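For anyone wanting to redo the arithmetic for other layouts, the TB-vs-TiB conversion is where the “missing” capacity goes (a quick sketch, ignoring filesystem overhead):

```python
TB = 1000 ** 4   # the terabyte drive makers sell
TiB = 1024 ** 4  # the tebibyte the OS reports

def usable_tib(drives: int, redundancy: int, size_tb: float) -> float:
    """Usable capacity in TiB when `redundancy` drives' worth of space
    is spent on mirroring or parity."""
    return (drives - redundancy) * size_tb * TB / TiB

print(usable_tib(18, 9, 18))     # 9 mirrored pairs of 18 TB -> ~147 TiB
print(usable_tib(9, 3, 18))      # 9 drives, 3-drive parity  -> ~98 TiB
print(2 * usable_tib(9, 3, 18))  # two such groups           -> ~196 TiB
```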


¹ There was once a time when ECC was supported by all boards and CPUs. But 
then someone invented market segmentation to increase profits through 
upselling.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Skype jokes are oftentimes not understood, even when they’re repeated.


signature.asc
Description: PGP signature


Re: [gentoo-user] Certain packages refuse to use binary from build save, the -k thing.

2023-09-18 Thread Frank Steinmetzger
Am Fri, Sep 15, 2023 at 11:44:03PM -0500 schrieb Dale:
> Howdy,

Hi

instead of going berserk and wasting kWh on rebuilding “just in case it 
might help”, why not try to dig a little deeper.

> A couple of my video players are not playing videos correctly.

Which players?

> I've
> rebuilt a few things but no change.  Some work fine, some give an error
> about a bad index or something, some just don't even try at all.

What is the exact error message?
Can you remux the file with ffmpeg?
`ffmpeg -i Inputfile.ext -c copy -map 0 Outputfile.ext`
This will take all streams of the input file and put them into a new file 
without re-encoding. Does ffmpeg give any errors about bad data? Can you 
play the produced file?

> The videos come from various sources and are of different file extensions. 

Can we have a look at some of those videos? The least I can try is to see 
whether they work here or show any sign of corruption.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Polymorphism is a multiform topic.  (SelfHTML forum)

